Test Report: QEMU_macOS 19868

Commit 7e440490692625b78ba9b7da2770c31edaec7633 : 2024-10-25 : build 36808

Failed tests (98/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 14.84
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.95
48 TestCertOptions 10.22
49 TestCertExpiration 195.37
50 TestDockerFlags 10.18
51 TestForceSystemdFlag 10.07
52 TestForceSystemdEnv 11.51
97 TestFunctional/parallel/ServiceCmdConnect 35.42
162 TestMultiControlPlane/serial/StartCluster 725.38
163 TestMultiControlPlane/serial/DeployApp 113.37
164 TestMultiControlPlane/serial/PingHostFromPods 0.1
165 TestMultiControlPlane/serial/AddWorkerNode 0.09
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
169 TestMultiControlPlane/serial/StopSecondaryNode 0.12
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
171 TestMultiControlPlane/serial/RestartSecondaryNode 0.15
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 960.47
184 TestJSONOutput/start/Command 725.26
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/pause/Command 0.09
196 TestJSONOutput/unpause/Command 0.06
216 TestMountStart/serial/StartWithMountFirst 10.09
219 TestMultiNode/serial/FreshStart2Nodes 9.96
220 TestMultiNode/serial/DeployApp2Nodes 80.17
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.08
223 TestMultiNode/serial/MultiNodeLabels 0.07
224 TestMultiNode/serial/ProfileList 0.09
225 TestMultiNode/serial/CopyFile 0.07
226 TestMultiNode/serial/StopNode 0.15
227 TestMultiNode/serial/StartAfterStop 48.12
228 TestMultiNode/serial/RestartKeepsNodes 8.48
229 TestMultiNode/serial/DeleteNode 0.11
230 TestMultiNode/serial/StopMultiNode 2.18
231 TestMultiNode/serial/RestartMultiNode 5.27
232 TestMultiNode/serial/ValidateNameConflict 20.12
236 TestPreload 10.25
238 TestScheduledStopUnix 10.2
239 TestSkaffold 12.63
242 TestRunningBinaryUpgrade 599.16
244 TestKubernetesUpgrade 19.11
257 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.11
258 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 0.98
260 TestStoppedBinaryUpgrade/Upgrade 575.81
262 TestPause/serial/Start 9.98
272 TestNoKubernetes/serial/StartWithK8s 9.86
273 TestNoKubernetes/serial/StartWithStopK8s 5.3
274 TestNoKubernetes/serial/Start 5.33
278 TestNoKubernetes/serial/StartNoArgs 5.32
280 TestNetworkPlugins/group/auto/Start 9.81
281 TestNetworkPlugins/group/kindnet/Start 9.85
282 TestNetworkPlugins/group/calico/Start 9.8
283 TestNetworkPlugins/group/custom-flannel/Start 9.86
284 TestNetworkPlugins/group/false/Start 9.87
285 TestNetworkPlugins/group/enable-default-cni/Start 9.87
286 TestNetworkPlugins/group/flannel/Start 9.84
287 TestNetworkPlugins/group/bridge/Start 9.97
288 TestNetworkPlugins/group/kubenet/Start 9.76
291 TestStartStop/group/old-k8s-version/serial/FirstStart 9.82
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.28
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
300 TestStartStop/group/old-k8s-version/serial/Pause 0.11
302 TestStartStop/group/no-preload/serial/FirstStart 9.91
303 TestStartStop/group/no-preload/serial/DeployApp 0.1
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.13
307 TestStartStop/group/embed-certs/serial/FirstStart 9.92
309 TestStartStop/group/no-preload/serial/SecondStart 5.33
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.07
312 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.09
313 TestStartStop/group/no-preload/serial/Pause 0.12
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.82
316 TestStartStop/group/embed-certs/serial/DeployApp 0.11
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.13
320 TestStartStop/group/embed-certs/serial/SecondStart 7.68
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.11
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
323 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.07
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.13
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.09
326 TestStartStop/group/embed-certs/serial/Pause 0.12
329 TestStartStop/group/newest-cni/serial/FirstStart 10.1
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.9
332 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.04
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.07
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
340 TestStartStop/group/newest-cni/serial/SecondStart 5.27
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
344 TestStartStop/group/newest-cni/serial/Pause 0.12
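
Note: most of the short (~10 s) failures in this table share one root cause that recurs throughout the logs below: QEMU could not reach the socket_vmnet socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A quick way to count how many failures hit that error is to grep the saved report; the filename report.txt here is a placeholder, not part of the CI output:

	grep -c 'Failed to connect to "/var/run/socket_vmnet"' report.txt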
TestDownloadOnly/v1.20.0/json-events (14.84s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-797000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-797000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (14.843214666s)

-- stdout --
	{"specversion":"1.0","id":"60067216-ade6-477a-9654-b8bc56f65b51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-797000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1e891096-4d7e-45a1-b6dd-208b729b8927","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19868"}}
	{"specversion":"1.0","id":"6f6ffd99-ae5b-4f41-bc14-1e916abee5d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig"}}
	{"specversion":"1.0","id":"570468ce-df4a-4150-895b-e33aee8f454b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"90e032b1-1f08-4df9-91f0-adcda258592e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"aea2f40c-dcac-48f4-9fa3-bbf87fb7c7e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube"}}
	{"specversion":"1.0","id":"c9095b7e-898e-4666-8751-db8c7cf92374","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"17b3a223-310f-49e9-b726-85f8b89aff5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d128d21d-d170-448f-ad3b-25707b3ad298","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"d860bfb6-fc24-44f3-8675-2f46ebd74dd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6bc8d062-ba27-4136-a7e0-6df0a6bd0e62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-797000\" primary control-plane node in \"download-only-797000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"12e5c8c7-870f-4b20-9542-944f31437ce8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cb6e14a1-c6c6-463f-86d8-21fff8d71185","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1073b5320 0x1073b5320 0x1073b5320 0x1073b5320 0x1073b5320 0x1073b5320 0x1073b5320] Decompressors:map[bz2:0x14000886210 gz:0x14000886218 tar:0x140008861c0 tar.bz2:0x140008861d0 tar.gz:0x140008861e0 tar.xz:0x140008861f0 tar.zst:0x14000886200 tbz2:0x140008861d0 tgz:0x14
0008861e0 txz:0x140008861f0 tzst:0x14000886200 xz:0x14000886220 zip:0x14000886230 zst:0x14000886228] Getters:map[file:0x14000ac2870 http:0x140008740a0 https:0x140008740f0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"bfaaba97-5da2-41da-8a06-ee901180a8d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1025 17:42:39.571798    1673 out.go:345] Setting OutFile to fd 1 ...
	I1025 17:42:39.571965    1673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 17:42:39.571969    1673 out.go:358] Setting ErrFile to fd 2...
	I1025 17:42:39.571971    1673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 17:42:39.572085    1673 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	W1025 17:42:39.572175    1673 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19868-1112/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19868-1112/.minikube/config/config.json: no such file or directory
	I1025 17:42:39.573560    1673 out.go:352] Setting JSON to true
	I1025 17:42:39.592908    1673 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":730,"bootTime":1729902629,"procs":559,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 17:42:39.592980    1673 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 17:42:39.597472    1673 out.go:97] [download-only-797000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 17:42:39.597613    1673 notify.go:220] Checking for updates...
	W1025 17:42:39.597653    1673 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 17:42:39.600412    1673 out.go:169] MINIKUBE_LOCATION=19868
	I1025 17:42:39.605317    1673 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 17:42:39.609481    1673 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 17:42:39.612450    1673 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 17:42:39.613972    1673 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	W1025 17:42:39.620463    1673 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 17:42:39.620698    1673 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 17:42:39.624432    1673 out.go:97] Using the qemu2 driver based on user configuration
	I1025 17:42:39.624452    1673 start.go:297] selected driver: qemu2
	I1025 17:42:39.624466    1673 start.go:901] validating driver "qemu2" against <nil>
	I1025 17:42:39.624525    1673 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 17:42:39.628423    1673 out.go:169] Automatically selected the socket_vmnet network
	I1025 17:42:39.635424    1673 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1025 17:42:39.635534    1673 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 17:42:39.635596    1673 cni.go:84] Creating CNI manager for ""
	I1025 17:42:39.635641    1673 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 17:42:39.635700    1673 start.go:340] cluster config:
	{Name:download-only-797000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 17:42:39.640205    1673 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 17:42:39.643371    1673 out.go:97] Downloading VM boot image ...
	I1025 17:42:39.643409    1673 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso
	I1025 17:42:45.919368    1673 out.go:97] Starting "download-only-797000" primary control-plane node in "download-only-797000" cluster
	I1025 17:42:45.919411    1673 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 17:42:45.977095    1673 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1025 17:42:45.977117    1673 cache.go:56] Caching tarball of preloaded images
	I1025 17:42:45.977316    1673 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 17:42:45.981410    1673 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1025 17:42:45.981416    1673 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 17:42:46.062299    1673 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1025 17:42:53.053641    1673 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 17:42:53.053803    1673 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 17:42:53.765875    1673 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1025 17:42:53.766083    1673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/download-only-797000/config.json ...
	I1025 17:42:53.766100    1673 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/download-only-797000/config.json: {Name:mk10c10bf644c4c9b3237622517f91c78f3b9cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:42:53.766364    1673 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 17:42:53.766614    1673 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1025 17:42:54.334703    1673 out.go:193] 
	W1025 17:42:54.338897    1673 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1073b5320 0x1073b5320 0x1073b5320 0x1073b5320 0x1073b5320 0x1073b5320 0x1073b5320] Decompressors:map[bz2:0x14000886210 gz:0x14000886218 tar:0x140008861c0 tar.bz2:0x140008861d0 tar.gz:0x140008861e0 tar.xz:0x140008861f0 tar.zst:0x14000886200 tbz2:0x140008861d0 tgz:0x140008861e0 txz:0x140008861f0 tzst:0x14000886200 xz:0x14000886220 zip:0x14000886230 zst:0x14000886228] Getters:map[file:0x14000ac2870 http:0x140008740a0 https:0x140008740f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1025 17:42:54.338925    1673 out_reason.go:110] 
	W1025 17:42:54.346861    1673 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 17:42:54.350724    1673 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-797000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (14.84s)
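
Exit status 40 here is a cache/download failure, not a VM failure: dl.k8s.io returns 404 for the v1.20.0 darwin/arm64 kubectl checksum, most likely because Kubernetes v1.20.0 predates published darwin/arm64 kubectl binaries. Assuming curl is available on the host, the 404 can be confirmed directly:

	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1

The TestDownloadOnly/v1.20.0/kubectl failure that follows is downstream of this one: the binary was never cached, so its stat check cannot succeed.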

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.95s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-347000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-347000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.793986375s)

-- stdout --
	* [offline-docker-347000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-347000" primary control-plane node in "offline-docker-347000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-347000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:41:28.631142    4271 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:41:28.631313    4271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:41:28.631317    4271 out.go:358] Setting ErrFile to fd 2...
	I1025 18:41:28.631327    4271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:41:28.631479    4271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:41:28.632696    4271 out.go:352] Setting JSON to false
	I1025 18:41:28.652457    4271 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4259,"bootTime":1729902629,"procs":554,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:41:28.652545    4271 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:41:28.657321    4271 out.go:177] * [offline-docker-347000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:41:28.665236    4271 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:41:28.665232    4271 notify.go:220] Checking for updates...
	I1025 18:41:28.671210    4271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:41:28.674155    4271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:41:28.677145    4271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:41:28.680175    4271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:41:28.683237    4271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:41:28.684744    4271 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:41:28.684799    4271 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:41:28.689122    4271 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:41:28.696073    4271 start.go:297] selected driver: qemu2
	I1025 18:41:28.696084    4271 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:41:28.696092    4271 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:41:28.698298    4271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:41:28.701136    4271 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:41:28.704237    4271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:41:28.704257    4271 cni.go:84] Creating CNI manager for ""
	I1025 18:41:28.704278    4271 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:41:28.704284    4271 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 18:41:28.704323    4271 start.go:340] cluster config:
	{Name:offline-docker-347000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-347000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:41:28.709146    4271 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:41:28.717167    4271 out.go:177] * Starting "offline-docker-347000" primary control-plane node in "offline-docker-347000" cluster
	I1025 18:41:28.721175    4271 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:41:28.721213    4271 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:41:28.721222    4271 cache.go:56] Caching tarball of preloaded images
	I1025 18:41:28.721315    4271 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:41:28.721329    4271 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:41:28.721393    4271 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/offline-docker-347000/config.json ...
	I1025 18:41:28.721403    4271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/offline-docker-347000/config.json: {Name:mk5b9eb91b60d487182f2d453b3e36ca09d4dda0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:41:28.721710    4271 start.go:360] acquireMachinesLock for offline-docker-347000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:41:28.721757    4271 start.go:364] duration metric: took 37.416µs to acquireMachinesLock for "offline-docker-347000"
	I1025 18:41:28.721769    4271 start.go:93] Provisioning new machine with config: &{Name:offline-docker-347000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-347000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:41:28.721810    4271 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:41:28.726168    4271 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 18:41:28.741450    4271 start.go:159] libmachine.API.Create for "offline-docker-347000" (driver="qemu2")
	I1025 18:41:28.741482    4271 client.go:168] LocalClient.Create starting
	I1025 18:41:28.741562    4271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:41:28.741607    4271 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:28.741619    4271 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:28.741663    4271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:41:28.741692    4271 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:28.741702    4271 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:28.742115    4271 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:41:28.898003    4271 main.go:141] libmachine: Creating SSH key...
	I1025 18:41:28.974178    4271 main.go:141] libmachine: Creating Disk image...
	I1025 18:41:28.974190    4271 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:41:28.974413    4271 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/disk.qcow2
	I1025 18:41:28.985526    4271 main.go:141] libmachine: STDOUT: 
	I1025 18:41:28.985555    4271 main.go:141] libmachine: STDERR: 
	I1025 18:41:28.985656    4271 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/disk.qcow2 +20000M
	I1025 18:41:28.995334    4271 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:41:28.995360    4271 main.go:141] libmachine: STDERR: 
	I1025 18:41:28.995383    4271 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/disk.qcow2
	I1025 18:41:28.995389    4271 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:41:28.995403    4271 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:41:28.995438    4271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:bf:75:2f:60:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/disk.qcow2
	I1025 18:41:28.997323    4271 main.go:141] libmachine: STDOUT: 
	I1025 18:41:28.997337    4271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:41:28.997354    4271 client.go:171] duration metric: took 255.8725ms to LocalClient.Create
	I1025 18:41:30.999391    4271 start.go:128] duration metric: took 2.277621458s to createHost
	I1025 18:41:30.999410    4271 start.go:83] releasing machines lock for "offline-docker-347000", held for 2.277696s
	W1025 18:41:30.999427    4271 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:41:31.008978    4271 out.go:177] * Deleting "offline-docker-347000" in qemu2 ...
	W1025 18:41:31.017883    4271 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:41:31.017891    4271 start.go:729] Will try again in 5 seconds ...
	I1025 18:41:36.020045    4271 start.go:360] acquireMachinesLock for offline-docker-347000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:41:36.020495    4271 start.go:364] duration metric: took 322.917µs to acquireMachinesLock for "offline-docker-347000"
	I1025 18:41:36.020573    4271 start.go:93] Provisioning new machine with config: &{Name:offline-docker-347000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-347000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:41:36.020768    4271 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:41:36.030240    4271 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 18:41:36.071449    4271 start.go:159] libmachine.API.Create for "offline-docker-347000" (driver="qemu2")
	I1025 18:41:36.071499    4271 client.go:168] LocalClient.Create starting
	I1025 18:41:36.071634    4271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:41:36.071703    4271 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:36.071718    4271 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:36.071780    4271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:41:36.071837    4271 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:36.071847    4271 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:36.072369    4271 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:41:36.236208    4271 main.go:141] libmachine: Creating SSH key...
	I1025 18:41:36.319327    4271 main.go:141] libmachine: Creating Disk image...
	I1025 18:41:36.319334    4271 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:41:36.319516    4271 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/disk.qcow2
	I1025 18:41:36.329447    4271 main.go:141] libmachine: STDOUT: 
	I1025 18:41:36.329472    4271 main.go:141] libmachine: STDERR: 
	I1025 18:41:36.329529    4271 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/disk.qcow2 +20000M
	I1025 18:41:36.338402    4271 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:41:36.338458    4271 main.go:141] libmachine: STDERR: 
	I1025 18:41:36.338480    4271 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/disk.qcow2
	I1025 18:41:36.338485    4271 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:41:36.338498    4271 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:41:36.338524    4271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:7a:9a:00:61:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/offline-docker-347000/disk.qcow2
	I1025 18:41:36.340419    4271 main.go:141] libmachine: STDOUT: 
	I1025 18:41:36.340465    4271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:41:36.340477    4271 client.go:171] duration metric: took 268.979084ms to LocalClient.Create
	I1025 18:41:38.342673    4271 start.go:128] duration metric: took 2.321916875s to createHost
	I1025 18:41:38.342764    4271 start.go:83] releasing machines lock for "offline-docker-347000", held for 2.3222995s
	W1025 18:41:38.343199    4271 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-347000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-347000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:41:38.357915    4271 out.go:201] 
	W1025 18:41:38.361048    4271 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:41:38.361072    4271 out.go:270] * 
	* 
	W1025 18:41:38.363901    4271 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:41:38.375861    4271 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-347000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-10-25 18:41:38.392735 -0700 PDT m=+3539.000017001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-347000 -n offline-docker-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-347000 -n offline-docker-347000: exit status 7 (71.651375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-347000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-347000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-347000
--- FAIL: TestOffline (9.95s)
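
The repeated "Connection refused" errors point at the socket_vmnet helper rather than at minikube itself: libmachine launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which requires a socket_vmnet daemon listening on /var/run/socket_vmnet. A plausible first triage step on the agent is to check whether the socket exists and the daemon is loaded; the launchd label greгреп'd for below follows the common lima-vm packaging and may differ on this host:

	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet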

TestCertOptions (10.22s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-814000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-814000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.944336333s)

-- stdout --
	* [cert-options-814000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-814000" primary control-plane node in "cert-options-814000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-814000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-814000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-814000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-814000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-814000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (84.335875ms)

-- stdout --
	* The control-plane node cert-options-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-814000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-814000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-814000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-814000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-814000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (46.262917ms)

-- stdout --
	* The control-plane node cert-options-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-814000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-814000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-814000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-814000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-10-25 18:42:10.34987 -0700 PDT m=+3570.957817293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-814000 -n cert-options-814000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-814000 -n cert-options-814000: exit status 7 (34.419208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-814000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-814000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-814000
--- FAIL: TestCertOptions (10.22s)
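
Every check in TestCertOptions failed for the same underlying reason: the qemu2 driver could not reach the socket_vmnet helper at /var/run/socket_vmnet ("Connection refused"), so no VM was ever created and every follow-up command ran against a stopped profile. A minimal shell sketch for triaging the helper on the build host, assuming the Homebrew-installed socket_vmnet service that minikube's qemu2 driver is configured to use here:

	# Verify the unix socket exists and see who, if anyone, owns it.
	ls -l /var/run/socket_vmnet

	# socket_vmnet must run as root to create vmnet interfaces; check
	# the launchd service and restart it if it has stopped.
	sudo brew services info socket_vmnet
	sudo brew services restart socket_vmnet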

TestCertExpiration (195.37s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-614000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-614000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.002248583s)

-- stdout --
	* [cert-expiration-614000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-614000" primary control-plane node in "cert-expiration-614000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-614000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-614000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-614000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-614000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-614000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.2320295s)

-- stdout --
	* [cert-expiration-614000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-614000" primary control-plane node in "cert-expiration-614000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-614000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-614000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-614000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-614000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-614000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-614000" primary control-plane node in "cert-expiration-614000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-614000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-614000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-614000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-25 18:45:10.415445 -0700 PDT m=+3751.027138209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-614000 -n cert-expiration-614000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-614000 -n cert-expiration-614000: exit status 7 (47.751583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-614000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-614000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-614000
--- FAIL: TestCertExpiration (195.37s)
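
TestCertExpiration never reached its real assertions: both starts died during guest provisioning on the same socket_vmnet "Connection refused" error, so the expired-certificate warning check at cert_options_test.go:136 compared against an error transcript rather than real start output. On a host where socket_vmnet is healthy, the intended flow can be replayed by hand (a sketch reusing the exact commands from the log above):

	out/minikube-darwin-arm64 start -p cert-expiration-614000 --memory=2048 --cert-expiration=3m --driver=qemu2
	sleep 180   # let the 3-minute certificates expire
	# This second start should succeed and is the step expected to warn
	# about the expired certs (the condition cert_options_test.go:136 checks).
	out/minikube-darwin-arm64 start -p cert-expiration-614000 --memory=2048 --cert-expiration=8760h --driver=qemu2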

TestDockerFlags (10.18s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-922000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-922000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.930386917s)

-- stdout --
	* [docker-flags-922000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-922000" primary control-plane node in "docker-flags-922000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-922000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:41:50.093469    4471 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:41:50.093624    4471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:41:50.093628    4471 out.go:358] Setting ErrFile to fd 2...
	I1025 18:41:50.093630    4471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:41:50.093770    4471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:41:50.094937    4471 out.go:352] Setting JSON to false
	I1025 18:41:50.112564    4471 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4281,"bootTime":1729902629,"procs":559,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:41:50.112633    4471 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:41:50.118061    4471 out.go:177] * [docker-flags-922000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:41:50.126009    4471 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:41:50.126078    4471 notify.go:220] Checking for updates...
	I1025 18:41:50.132990    4471 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:41:50.135962    4471 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:41:50.138982    4471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:41:50.142069    4471 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:41:50.147960    4471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:41:50.151313    4471 config.go:182] Loaded profile config "force-systemd-flag-194000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:41:50.151387    4471 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:41:50.151436    4471 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:41:50.155874    4471 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:41:50.162984    4471 start.go:297] selected driver: qemu2
	I1025 18:41:50.162990    4471 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:41:50.162998    4471 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:41:50.165605    4471 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:41:50.168957    4471 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:41:50.172064    4471 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1025 18:41:50.172089    4471 cni.go:84] Creating CNI manager for ""
	I1025 18:41:50.172120    4471 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:41:50.172125    4471 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 18:41:50.172151    4471 start.go:340] cluster config:
	{Name:docker-flags-922000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-922000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:41:50.176845    4471 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:41:50.184975    4471 out.go:177] * Starting "docker-flags-922000" primary control-plane node in "docker-flags-922000" cluster
	I1025 18:41:50.188958    4471 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:41:50.188972    4471 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:41:50.188983    4471 cache.go:56] Caching tarball of preloaded images
	I1025 18:41:50.189064    4471 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:41:50.189070    4471 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:41:50.189127    4471 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/docker-flags-922000/config.json ...
	I1025 18:41:50.189138    4471 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/docker-flags-922000/config.json: {Name:mkf27a287db1f7357860fa5b6726d4193ba006c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:41:50.189524    4471 start.go:360] acquireMachinesLock for docker-flags-922000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:41:50.189579    4471 start.go:364] duration metric: took 44.959µs to acquireMachinesLock for "docker-flags-922000"
	I1025 18:41:50.189593    4471 start.go:93] Provisioning new machine with config: &{Name:docker-flags-922000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-922000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:41:50.189619    4471 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:41:50.197949    4471 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 18:41:50.215797    4471 start.go:159] libmachine.API.Create for "docker-flags-922000" (driver="qemu2")
	I1025 18:41:50.215829    4471 client.go:168] LocalClient.Create starting
	I1025 18:41:50.215899    4471 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:41:50.215936    4471 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:50.215948    4471 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:50.215986    4471 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:41:50.216017    4471 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:50.216030    4471 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:50.216453    4471 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:41:50.372394    4471 main.go:141] libmachine: Creating SSH key...
	I1025 18:41:50.424986    4471 main.go:141] libmachine: Creating Disk image...
	I1025 18:41:50.424992    4471 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:41:50.425177    4471 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/disk.qcow2
	I1025 18:41:50.435031    4471 main.go:141] libmachine: STDOUT: 
	I1025 18:41:50.435053    4471 main.go:141] libmachine: STDERR: 
	I1025 18:41:50.435108    4471 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/disk.qcow2 +20000M
	I1025 18:41:50.443527    4471 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:41:50.443544    4471 main.go:141] libmachine: STDERR: 
	I1025 18:41:50.443564    4471 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/disk.qcow2
	I1025 18:41:50.443569    4471 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:41:50.443579    4471 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:41:50.443612    4471 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:14:8c:f4:47:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/disk.qcow2
	I1025 18:41:50.445410    4471 main.go:141] libmachine: STDOUT: 
	I1025 18:41:50.445430    4471 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:41:50.445448    4471 client.go:171] duration metric: took 229.618083ms to LocalClient.Create
	I1025 18:41:52.447590    4471 start.go:128] duration metric: took 2.257996709s to createHost
	I1025 18:41:52.447650    4471 start.go:83] releasing machines lock for "docker-flags-922000", held for 2.258107791s
	W1025 18:41:52.447702    4471 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:41:52.453954    4471 out.go:177] * Deleting "docker-flags-922000" in qemu2 ...
	W1025 18:41:52.482437    4471 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:41:52.482462    4471 start.go:729] Will try again in 5 seconds ...
	I1025 18:41:57.484544    4471 start.go:360] acquireMachinesLock for docker-flags-922000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:41:57.601000    4471 start.go:364] duration metric: took 116.36575ms to acquireMachinesLock for "docker-flags-922000"
	I1025 18:41:57.601173    4471 start.go:93] Provisioning new machine with config: &{Name:docker-flags-922000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-922000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:41:57.601467    4471 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:41:57.612106    4471 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 18:41:57.661952    4471 start.go:159] libmachine.API.Create for "docker-flags-922000" (driver="qemu2")
	I1025 18:41:57.662002    4471 client.go:168] LocalClient.Create starting
	I1025 18:41:57.662167    4471 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:41:57.662244    4471 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:57.662270    4471 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:57.662331    4471 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:41:57.662391    4471 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:57.662408    4471 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:57.662944    4471 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:41:57.832056    4471 main.go:141] libmachine: Creating SSH key...
	I1025 18:41:57.919797    4471 main.go:141] libmachine: Creating Disk image...
	I1025 18:41:57.919808    4471 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:41:57.920007    4471 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/disk.qcow2
	I1025 18:41:57.930273    4471 main.go:141] libmachine: STDOUT: 
	I1025 18:41:57.930299    4471 main.go:141] libmachine: STDERR: 
	I1025 18:41:57.930352    4471 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/disk.qcow2 +20000M
	I1025 18:41:57.938816    4471 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:41:57.938837    4471 main.go:141] libmachine: STDERR: 
	I1025 18:41:57.938848    4471 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/disk.qcow2
	I1025 18:41:57.938854    4471 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:41:57.938863    4471 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:41:57.938910    4471 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:bb:7d:a1:26:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/docker-flags-922000/disk.qcow2
	I1025 18:41:57.940693    4471 main.go:141] libmachine: STDOUT: 
	I1025 18:41:57.940705    4471 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:41:57.940717    4471 client.go:171] duration metric: took 278.716083ms to LocalClient.Create
	I1025 18:41:59.942852    4471 start.go:128] duration metric: took 2.341403333s to createHost
	I1025 18:41:59.942965    4471 start.go:83] releasing machines lock for "docker-flags-922000", held for 2.34193775s
	W1025 18:41:59.943365    4471 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-922000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-922000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:41:59.956023    4471 out.go:201] 
	W1025 18:41:59.965972    4471 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:41:59.965998    4471 out.go:270] * 
	* 
	W1025 18:41:59.968864    4471 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:41:59.976006    4471 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-922000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-922000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-922000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (87.513666ms)

-- stdout --
	* The control-plane node docker-flags-922000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-922000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-922000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-922000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-922000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-922000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-922000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-922000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-922000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.627875ms)

-- stdout --
	* The control-plane node docker-flags-922000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-922000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-922000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-922000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-922000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-922000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-10-25 18:42:00.128832 -0700 PDT m=+3560.736566251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-922000 -n docker-flags-922000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-922000 -n docker-flags-922000: exit status 7 (33.18575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-922000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-922000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-922000
--- FAIL: TestDockerFlags (10.18s)
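
As with the other qemu2 tests, none of the --docker-env/--docker-opt assertions ran against a live Docker daemon. For reference, a sketch of what docker_test.go:56 and docker_test.go:67 would inspect on a healthy cluster; the commands are taken from the log, while the sample output shape is an assumption:

	out/minikube-darwin-arm64 -p docker-flags-922000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	# expected to contain: Environment=FOO=BAR BAZ=BAT ...

	out/minikube-darwin-arm64 -p docker-flags-922000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
	# expected to show --debug and --icc=true from the --docker-opt flags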

TestForceSystemdFlag (10.07s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-194000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-194000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.867827166s)

-- stdout --
	* [force-systemd-flag-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-194000" primary control-plane node in "force-systemd-flag-194000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-194000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:41:45.144491    4446 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:41:45.144639    4446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:41:45.144643    4446 out.go:358] Setting ErrFile to fd 2...
	I1025 18:41:45.144645    4446 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:41:45.144768    4446 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:41:45.146246    4446 out.go:352] Setting JSON to false
	I1025 18:41:45.164063    4446 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4276,"bootTime":1729902629,"procs":555,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:41:45.164143    4446 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:41:45.170450    4446 out.go:177] * [force-systemd-flag-194000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:41:45.179198    4446 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:41:45.179287    4446 notify.go:220] Checking for updates...
	I1025 18:41:45.186060    4446 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:41:45.189126    4446 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:41:45.192181    4446 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:41:45.195094    4446 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:41:45.198142    4446 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:41:45.201485    4446 config.go:182] Loaded profile config "force-systemd-env-390000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:41:45.201571    4446 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:41:45.201639    4446 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:41:45.206165    4446 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:41:45.213144    4446 start.go:297] selected driver: qemu2
	I1025 18:41:45.213150    4446 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:41:45.213156    4446 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:41:45.215616    4446 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:41:45.219105    4446 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:41:45.222192    4446 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 18:41:45.222209    4446 cni.go:84] Creating CNI manager for ""
	I1025 18:41:45.222233    4446 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:41:45.222241    4446 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 18:41:45.222278    4446 start.go:340] cluster config:
	{Name:force-systemd-flag-194000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:41:45.227127    4446 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:41:45.235110    4446 out.go:177] * Starting "force-systemd-flag-194000" primary control-plane node in "force-systemd-flag-194000" cluster
	I1025 18:41:45.243143    4446 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:41:45.243162    4446 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:41:45.243172    4446 cache.go:56] Caching tarball of preloaded images
	I1025 18:41:45.243257    4446 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:41:45.243263    4446 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:41:45.243319    4446 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/force-systemd-flag-194000/config.json ...
	I1025 18:41:45.243330    4446 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/force-systemd-flag-194000/config.json: {Name:mk38e6172054bfba502b5c86d7a934c0e7ee0cf9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:41:45.243775    4446 start.go:360] acquireMachinesLock for force-systemd-flag-194000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:41:45.243828    4446 start.go:364] duration metric: took 45.667µs to acquireMachinesLock for "force-systemd-flag-194000"
	I1025 18:41:45.243841    4446 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:41:45.243883    4446 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:41:45.251105    4446 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 18:41:45.268699    4446 start.go:159] libmachine.API.Create for "force-systemd-flag-194000" (driver="qemu2")
	I1025 18:41:45.268732    4446 client.go:168] LocalClient.Create starting
	I1025 18:41:45.268811    4446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:41:45.268855    4446 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:45.268867    4446 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:45.268909    4446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:41:45.268942    4446 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:45.268950    4446 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:45.269453    4446 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:41:45.424819    4446 main.go:141] libmachine: Creating SSH key...
	I1025 18:41:45.451153    4446 main.go:141] libmachine: Creating Disk image...
	I1025 18:41:45.451158    4446 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:41:45.451335    4446 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/disk.qcow2
	I1025 18:41:45.461120    4446 main.go:141] libmachine: STDOUT: 
	I1025 18:41:45.461142    4446 main.go:141] libmachine: STDERR: 
	I1025 18:41:45.461204    4446 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/disk.qcow2 +20000M
	I1025 18:41:45.469730    4446 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:41:45.469753    4446 main.go:141] libmachine: STDERR: 
	I1025 18:41:45.469769    4446 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/disk.qcow2
	I1025 18:41:45.469774    4446 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:41:45.469787    4446 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:41:45.469817    4446 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:aa:95:5e:d5:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/disk.qcow2
	I1025 18:41:45.471622    4446 main.go:141] libmachine: STDOUT: 
	I1025 18:41:45.471637    4446 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:41:45.471657    4446 client.go:171] duration metric: took 202.922083ms to LocalClient.Create
	I1025 18:41:47.473829    4446 start.go:128] duration metric: took 2.229957458s to createHost
	I1025 18:41:47.473914    4446 start.go:83] releasing machines lock for "force-systemd-flag-194000", held for 2.230122625s
	W1025 18:41:47.473969    4446 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:41:47.497279    4446 out.go:177] * Deleting "force-systemd-flag-194000" in qemu2 ...
	W1025 18:41:47.518123    4446 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:41:47.518145    4446 start.go:729] Will try again in 5 seconds ...
	I1025 18:41:52.520243    4446 start.go:360] acquireMachinesLock for force-systemd-flag-194000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:41:52.520755    4446 start.go:364] duration metric: took 360.708µs to acquireMachinesLock for "force-systemd-flag-194000"
	I1025 18:41:52.520839    4446 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-194000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:41:52.521093    4446 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:41:52.529835    4446 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 18:41:52.578746    4446 start.go:159] libmachine.API.Create for "force-systemd-flag-194000" (driver="qemu2")
	I1025 18:41:52.578796    4446 client.go:168] LocalClient.Create starting
	I1025 18:41:52.578938    4446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:41:52.579036    4446 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:52.579054    4446 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:52.579118    4446 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:41:52.579187    4446 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:52.579201    4446 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:52.580089    4446 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:41:52.750409    4446 main.go:141] libmachine: Creating SSH key...
	I1025 18:41:52.911256    4446 main.go:141] libmachine: Creating Disk image...
	I1025 18:41:52.911263    4446 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:41:52.911479    4446 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/disk.qcow2
	I1025 18:41:52.921722    4446 main.go:141] libmachine: STDOUT: 
	I1025 18:41:52.921807    4446 main.go:141] libmachine: STDERR: 
	I1025 18:41:52.921866    4446 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/disk.qcow2 +20000M
	I1025 18:41:52.930332    4446 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:41:52.930395    4446 main.go:141] libmachine: STDERR: 
	I1025 18:41:52.930408    4446 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/disk.qcow2
	I1025 18:41:52.930414    4446 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:41:52.930422    4446 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:41:52.930449    4446 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:b0:90:99:47:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-flag-194000/disk.qcow2
	I1025 18:41:52.932228    4446 main.go:141] libmachine: STDOUT: 
	I1025 18:41:52.932242    4446 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:41:52.932254    4446 client.go:171] duration metric: took 353.459667ms to LocalClient.Create
	I1025 18:41:54.934387    4446 start.go:128] duration metric: took 2.413307959s to createHost
	I1025 18:41:54.934484    4446 start.go:83] releasing machines lock for "force-systemd-flag-194000", held for 2.413755125s
	W1025 18:41:54.934832    4446 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:41:54.948467    4446 out.go:201] 
	W1025 18:41:54.952407    4446 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:41:54.952434    4446 out.go:270] * 
	* 
	W1025 18:41:54.954770    4446 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:41:54.966389    4446 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-194000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-194000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-194000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.57125ms)

-- stdout --
	* The control-plane node force-systemd-flag-194000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-194000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-194000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-10-25 18:41:55.064735 -0700 PDT m=+3555.672363501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-194000 -n force-systemd-flag-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-194000 -n force-systemd-flag-194000: exit status 7 (37.161291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-194000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-194000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-194000
--- FAIL: TestForceSystemdFlag (10.07s)
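This failure, like TestForceSystemdEnv below, never gets past host creation: every qemu2 start aborts with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon that the qemu2 driver needs for networking was not listening on the host. A minimal triage sketch for the agent, assuming socket_vmnet was installed via Homebrew (the service name and restart command are assumptions, not taken from this log):

$ # is the unix socket present at the path minikube uses?
$ ls -l /var/run/socket_vmnet
$ # is the daemon process alive?
$ pgrep -fl socket_vmnet
$ # restart the Homebrew-managed service; root is required because vmnet needs elevated privileges
$ sudo brew services restart socket_vmnet

Once the socket accepts connections again, the same start command should get past createHost.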

TestForceSystemdEnv (11.51s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-390000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1025 18:41:39.765227    1672 install.go:79] stdout: 
W1025 18:41:39.765430    1672 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/001/docker-machine-driver-hyperkit 

I1025 18:41:39.765456    1672 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/001/docker-machine-driver-hyperkit]
I1025 18:41:39.781775    1672 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/001/docker-machine-driver-hyperkit]
I1025 18:41:39.794266    1672 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/001/docker-machine-driver-hyperkit]
I1025 18:41:39.805612    1672 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/001/docker-machine-driver-hyperkit]
I1025 18:41:39.827115    1672 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 18:41:39.827246    1672 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1025 18:41:41.642010    1672 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1025 18:41:41.642031    1672 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1025 18:41:41.642095    1672 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1025 18:41:41.642129    1672 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/002/docker-machine-driver-hyperkit
I1025 18:41:42.032362    1672 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x104d0e6e0 0x104d0e6e0 0x104d0e6e0 0x104d0e6e0 0x104d0e6e0 0x104d0e6e0 0x104d0e6e0] Decompressors:map[bz2:0x14000598468 gz:0x14000598520 tar:0x140005984c0 tar.bz2:0x140005984d0 tar.gz:0x140005984e0 tar.xz:0x140005984f0 tar.zst:0x14000598500 tbz2:0x140005984d0 tgz:0x140005984e0 txz:0x140005984f0 tzst:0x14000598500 xz:0x14000598528 zip:0x14000598530 zst:0x14000598550] Getters:map[file:0x140008581f0 http:0x1400089b270 https:0x1400089b2c0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1025 18:41:42.032481    1672 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/002/docker-machine-driver-hyperkit
I1025 18:41:45.056301    1672 install.go:79] stdout: 
W1025 18:41:45.056510    1672 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/002/docker-machine-driver-hyperkit 

I1025 18:41:45.056536    1672 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/002/docker-machine-driver-hyperkit]
I1025 18:41:45.074285    1672 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/002/docker-machine-driver-hyperkit]
I1025 18:41:45.088534    1672 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/002/docker-machine-driver-hyperkit]
I1025 18:41:45.100234    1672 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/002/docker-machine-driver-hyperkit]
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-390000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.310711041s)

-- stdout --
	* [force-systemd-env-390000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-390000" primary control-plane node in "force-systemd-env-390000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-390000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:41:38.583411    4414 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:41:38.583567    4414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:41:38.583570    4414 out.go:358] Setting ErrFile to fd 2...
	I1025 18:41:38.583573    4414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:41:38.583688    4414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:41:38.584829    4414 out.go:352] Setting JSON to false
	I1025 18:41:38.602551    4414 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4269,"bootTime":1729902629,"procs":558,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:41:38.602621    4414 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:41:38.608623    4414 out.go:177] * [force-systemd-env-390000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:41:38.616538    4414 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:41:38.616574    4414 notify.go:220] Checking for updates...
	I1025 18:41:38.622477    4414 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:41:38.625516    4414 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:41:38.628540    4414 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:41:38.631532    4414 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:41:38.634547    4414 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1025 18:41:38.637917    4414 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:41:38.637972    4414 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:41:38.642459    4414 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:41:38.649538    4414 start.go:297] selected driver: qemu2
	I1025 18:41:38.649545    4414 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:41:38.649551    4414 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:41:38.652048    4414 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:41:38.655534    4414 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:41:38.658586    4414 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 18:41:38.658604    4414 cni.go:84] Creating CNI manager for ""
	I1025 18:41:38.658645    4414 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:41:38.658649    4414 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 18:41:38.658688    4414 start.go:340] cluster config:
	{Name:force-systemd-env-390000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:41:38.663314    4414 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:41:38.671551    4414 out.go:177] * Starting "force-systemd-env-390000" primary control-plane node in "force-systemd-env-390000" cluster
	I1025 18:41:38.675518    4414 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:41:38.675536    4414 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:41:38.675550    4414 cache.go:56] Caching tarball of preloaded images
	I1025 18:41:38.675647    4414 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:41:38.675653    4414 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:41:38.675722    4414 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/force-systemd-env-390000/config.json ...
	I1025 18:41:38.675734    4414 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/force-systemd-env-390000/config.json: {Name:mkaaa05400aab39d6e800ad8fd907753b5e372bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:41:38.676116    4414 start.go:360] acquireMachinesLock for force-systemd-env-390000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:41:38.676168    4414 start.go:364] duration metric: took 45.375µs to acquireMachinesLock for "force-systemd-env-390000"
	I1025 18:41:38.676181    4414 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-390000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:41:38.676213    4414 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:41:38.684518    4414 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 18:41:38.702893    4414 start.go:159] libmachine.API.Create for "force-systemd-env-390000" (driver="qemu2")
	I1025 18:41:38.702928    4414 client.go:168] LocalClient.Create starting
	I1025 18:41:38.702996    4414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:41:38.703040    4414 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:38.703052    4414 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:38.703090    4414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:41:38.703119    4414 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:38.703128    4414 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:38.703577    4414 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:41:38.860534    4414 main.go:141] libmachine: Creating SSH key...
	I1025 18:41:39.033317    4414 main.go:141] libmachine: Creating Disk image...
	I1025 18:41:39.033328    4414 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:41:39.033526    4414 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/disk.qcow2
	I1025 18:41:39.043545    4414 main.go:141] libmachine: STDOUT: 
	I1025 18:41:39.043564    4414 main.go:141] libmachine: STDERR: 
	I1025 18:41:39.043626    4414 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/disk.qcow2 +20000M
	I1025 18:41:39.052220    4414 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:41:39.052235    4414 main.go:141] libmachine: STDERR: 
	I1025 18:41:39.052256    4414 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/disk.qcow2
	I1025 18:41:39.052262    4414 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:41:39.052274    4414 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:41:39.052310    4414 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:3d:2e:16:20:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/disk.qcow2
	I1025 18:41:39.054088    4414 main.go:141] libmachine: STDOUT: 
	I1025 18:41:39.054110    4414 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:41:39.054128    4414 client.go:171] duration metric: took 351.201708ms to LocalClient.Create
	I1025 18:41:41.056155    4414 start.go:128] duration metric: took 2.379984583s to createHost
	I1025 18:41:41.056172    4414 start.go:83] releasing machines lock for "force-systemd-env-390000", held for 2.380049209s
	W1025 18:41:41.056192    4414 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:41:41.069681    4414 out.go:177] * Deleting "force-systemd-env-390000" in qemu2 ...
	W1025 18:41:41.077943    4414 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:41:41.077956    4414 start.go:729] Will try again in 5 seconds ...
	I1025 18:41:46.080085    4414 start.go:360] acquireMachinesLock for force-systemd-env-390000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:41:47.474166    4414 start.go:364] duration metric: took 1.393929125s to acquireMachinesLock for "force-systemd-env-390000"
	I1025 18:41:47.474263    4414 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-390000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-390000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:41:47.474514    4414 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:41:47.488306    4414 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1025 18:41:47.536446    4414 start.go:159] libmachine.API.Create for "force-systemd-env-390000" (driver="qemu2")
	I1025 18:41:47.536504    4414 client.go:168] LocalClient.Create starting
	I1025 18:41:47.536675    4414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:41:47.536749    4414 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:47.536767    4414 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:47.536829    4414 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:41:47.536886    4414 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:47.536897    4414 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:47.537520    4414 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:41:47.704434    4414 main.go:141] libmachine: Creating SSH key...
	I1025 18:41:47.787436    4414 main.go:141] libmachine: Creating Disk image...
	I1025 18:41:47.787442    4414 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:41:47.787629    4414 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/disk.qcow2
	I1025 18:41:47.797958    4414 main.go:141] libmachine: STDOUT: 
	I1025 18:41:47.797973    4414 main.go:141] libmachine: STDERR: 
	I1025 18:41:47.798025    4414 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/disk.qcow2 +20000M
	I1025 18:41:47.806510    4414 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:41:47.806534    4414 main.go:141] libmachine: STDERR: 
	I1025 18:41:47.806550    4414 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/disk.qcow2
	I1025 18:41:47.806556    4414 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:41:47.806563    4414 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:41:47.806607    4414 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:5f:85:9a:90:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/force-systemd-env-390000/disk.qcow2
	I1025 18:41:47.808465    4414 main.go:141] libmachine: STDOUT: 
	I1025 18:41:47.808480    4414 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:41:47.808493    4414 client.go:171] duration metric: took 271.987833ms to LocalClient.Create
	I1025 18:41:49.810685    4414 start.go:128] duration metric: took 2.336177917s to createHost
	I1025 18:41:49.810765    4414 start.go:83] releasing machines lock for "force-systemd-env-390000", held for 2.336607s
	W1025 18:41:49.811235    4414 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-390000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-390000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:41:49.817108    4414 out.go:201] 
	W1025 18:41:49.834147    4414 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:41:49.834188    4414 out.go:270] * 
	* 
	W1025 18:41:49.837218    4414 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:41:49.846937    4414 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-390000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-390000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-390000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.250708ms)

-- stdout --
	* The control-plane node force-systemd-env-390000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-390000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-390000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-10-25 18:41:49.9421 -0700 PDT m=+3550.549622459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-390000 -n force-systemd-env-390000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-390000 -n force-systemd-env-390000: exit status 7 (37.647125ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-390000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-390000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-390000
--- FAIL: TestForceSystemdEnv (11.51s)
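Because the VM never starts, the test's real assertion is never exercised; the `docker info --format {{.CgroupDriver}}` query above is what would verify it. On a healthy run with MINIKUBE_FORCE_SYSTEMD=true, the expected exchange (hypothetical, not from this log) would be:

$ out/minikube-darwin-arm64 -p force-systemd-env-390000 ssh "docker info --format {{.CgroupDriver}}"
systemd

Any other value (typically cgroupfs, Docker's default driver) would fail the assertion even with a running VM.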

TestFunctional/parallel/ServiceCmdConnect (35.42s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-701000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-701000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-sxmwl" [58f892c6-f7c6-46a7-b5e4-2c39538682b6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-sxmwl" [58f892c6-f7c6-46a7-b5e4-2c39538682b6] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.00514225s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:32055
functional_test.go:1661: error fetching http://192.168.105.4:32055: Get "http://192.168.105.4:32055": dial tcp 192.168.105.4:32055: connect: connection refused
I1025 17:53:17.280661    1672 retry.go:31] will retry after 876.905311ms: Get "http://192.168.105.4:32055": dial tcp 192.168.105.4:32055: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32055: Get "http://192.168.105.4:32055": dial tcp 192.168.105.4:32055: connect: connection refused
I1025 17:53:18.161151    1672 retry.go:31] will retry after 1.542185274s: Get "http://192.168.105.4:32055": dial tcp 192.168.105.4:32055: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32055: Get "http://192.168.105.4:32055": dial tcp 192.168.105.4:32055: connect: connection refused
I1025 17:53:19.706723    1672 retry.go:31] will retry after 2.563938992s: Get "http://192.168.105.4:32055": dial tcp 192.168.105.4:32055: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32055: Get "http://192.168.105.4:32055": dial tcp 192.168.105.4:32055: connect: connection refused
I1025 17:53:22.274614    1672 retry.go:31] will retry after 3.194511629s: Get "http://192.168.105.4:32055": dial tcp 192.168.105.4:32055: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32055: Get "http://192.168.105.4:32055": dial tcp 192.168.105.4:32055: connect: connection refused
I1025 17:53:25.473032    1672 retry.go:31] will retry after 4.983335905s: Get "http://192.168.105.4:32055": dial tcp 192.168.105.4:32055: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32055: Get "http://192.168.105.4:32055": dial tcp 192.168.105.4:32055: connect: connection refused
I1025 17:53:30.460007    1672 retry.go:31] will retry after 10.913739241s: Get "http://192.168.105.4:32055": dial tcp 192.168.105.4:32055: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:32055: Get "http://192.168.105.4:32055": dial tcp 192.168.105.4:32055: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:32055: Get "http://192.168.105.4:32055": dial tcp 192.168.105.4:32055: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-701000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-sxmwl
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-701000/192.168.105.4
Start Time:       Fri, 25 Oct 2024 17:53:07 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
  echoserver-arm:
    Container ID:   docker://4be41f62461c84fca41f331acd8cb713c06c21345b1ae0fadaf12e9ef8fb2856
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 25 Oct 2024 17:53:27 -0700
      Finished:     Fri, 25 Oct 2024 17:53:27 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-svz8m (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-svz8m:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  34s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-sxmwl to functional-701000
  Normal   Pulling    34s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     30s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.505s (3.505s including waiting). Image size: 84957542 bytes.
  Normal   Created    14s (x3 over 30s)  kubelet            Created container echoserver-arm
  Normal   Started    14s (x3 over 30s)  kubelet            Started container echoserver-arm
  Normal   Pulled     14s (x2 over 30s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    1s (x4 over 29s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-sxmwl_default(58f892c6-f7c6-46a7-b5e4-2c39538682b6)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-701000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1614: (dbg) Run:  kubectl --context functional-701000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.56.111
IPs:                      10.97.56.111
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32055/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
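The root cause is visible in the pod log above: `exec /usr/sbin/nginx: exec format error` means the container's entrypoint binary was built for a different CPU architecture than the arm64 node, so the container crash-loops and the Service keeps an empty Endpoints list, which in turn explains every `connection refused` against the NodePort URL. A quick check for such an image/architecture mismatch (illustrative commands, not part of the test run):

$ # architecture recorded in the pulled image
$ docker image inspect registry.k8s.io/echoserver-arm:1.8 --format '{{.Architecture}}'
$ # architectures actually published in the registry manifest
$ docker manifest inspect registry.k8s.io/echoserver-arm:1.8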
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-701000 -n functional-701000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount     | -p functional-701000                                                                                                 | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1690203057/001:/mount-9p      |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-701000 ssh findmnt                                                                                        | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT | 25 Oct 24 17:53 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-701000 ssh -- ls                                                                                          | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT | 25 Oct 24 17:53 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-701000 ssh cat                                                                                            | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT | 25 Oct 24 17:53 PDT |
	|           | /mount-9p/test-1729904007990167000                                                                                   |                   |         |         |                     |                     |
	| ssh       | functional-701000 ssh stat                                                                                           | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT | 25 Oct 24 17:53 PDT |
	|           | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh       | functional-701000 ssh stat                                                                                           | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT | 25 Oct 24 17:53 PDT |
	|           | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-701000 ssh sudo                                                                                           | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT | 25 Oct 24 17:53 PDT |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-701000 ssh findmnt                                                                                        | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT |                     |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-701000                                                                                                 | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3730951591/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh       | functional-701000 ssh findmnt                                                                                        | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT | 25 Oct 24 17:53 PDT |
	|           | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-701000 ssh -- ls                                                                                          | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT | 25 Oct 24 17:53 PDT |
	|           | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh       | functional-701000 ssh sudo                                                                                           | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT |                     |
	|           | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount     | -p functional-701000                                                                                                 | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4221520768/001:/mount2   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-701000                                                                                                 | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4221520768/001:/mount3   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount     | -p functional-701000                                                                                                 | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4221520768/001:/mount1   |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh       | functional-701000 ssh findmnt                                                                                        | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-701000 ssh findmnt                                                                                        | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT |                     |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-701000 ssh findmnt                                                                                        | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT | 25 Oct 24 17:53 PDT |
	|           | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-701000 ssh findmnt                                                                                        | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT | 25 Oct 24 17:53 PDT |
	|           | -T /mount2                                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-701000 ssh findmnt                                                                                        | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT | 25 Oct 24 17:53 PDT |
	|           | -T /mount3                                                                                                           |                   |         |         |                     |                     |
	| mount     | -p functional-701000                                                                                                 | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT |                     |
	|           | --kill=true                                                                                                          |                   |         |         |                     |                     |
	| start     | -p functional-701000                                                                                                 | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-701000                                                                                                 | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT |                     |
	|           | --dry-run --memory                                                                                                   |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                              |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| start     | -p functional-701000 --dry-run                                                                                       | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                       |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                   | functional-701000 | jenkins | v1.34.0 | 25 Oct 24 17:53 PDT |                     |
	|           | -p functional-701000                                                                                                 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	|-----------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 17:53:36
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 17:53:36.774037    2292 out.go:345] Setting OutFile to fd 1 ...
	I1025 17:53:36.774207    2292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 17:53:36.774211    2292 out.go:358] Setting ErrFile to fd 2...
	I1025 17:53:36.774213    2292 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 17:53:36.774335    2292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 17:53:36.775444    2292 out.go:352] Setting JSON to false
	I1025 17:53:36.793569    2292 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1387,"bootTime":1729902629,"procs":550,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 17:53:36.793682    2292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 17:53:36.796604    2292 out.go:177] * [functional-701000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 17:53:36.803625    2292 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 17:53:36.803694    2292 notify.go:220] Checking for updates...
	I1025 17:53:36.810591    2292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 17:53:36.813564    2292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 17:53:36.816492    2292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 17:53:36.819572    2292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 17:53:36.822604    2292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 17:53:36.825748    2292 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 17:53:36.826012    2292 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 17:53:36.830557    2292 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 17:53:36.837581    2292 start.go:297] selected driver: qemu2
	I1025 17:53:36.837591    2292 start.go:901] validating driver "qemu2" against &{Name:functional-701000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 17:53:36.837727    2292 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 17:53:36.840155    2292 cni.go:84] Creating CNI manager for ""
	I1025 17:53:36.840187    2292 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 17:53:36.840226    2292 start.go:340] cluster config:
	{Name:functional-701000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 17:53:36.851530    2292 out.go:177] * dry-run validation complete!
	
	
	==> Docker <==
	Oct 26 00:53:33 functional-701000 dockerd[5688]: time="2024-10-26T00:53:33.951906075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 26 00:53:33 functional-701000 dockerd[5688]: time="2024-10-26T00:53:33.951939409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 26 00:53:33 functional-701000 dockerd[5688]: time="2024-10-26T00:53:33.951949326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 26 00:53:33 functional-701000 dockerd[5688]: time="2024-10-26T00:53:33.951983202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 26 00:53:33 functional-701000 dockerd[5688]: time="2024-10-26T00:53:33.983880035Z" level=info msg="shim disconnected" id=400f532312c61a20832e17ec17eba75ddb6b032a771523e89b268731a6cd26bf namespace=moby
	Oct 26 00:53:33 functional-701000 dockerd[5682]: time="2024-10-26T00:53:33.983898285Z" level=info msg="ignoring event" container=400f532312c61a20832e17ec17eba75ddb6b032a771523e89b268731a6cd26bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 26 00:53:33 functional-701000 dockerd[5688]: time="2024-10-26T00:53:33.984057165Z" level=warning msg="cleaning up after shim disconnected" id=400f532312c61a20832e17ec17eba75ddb6b032a771523e89b268731a6cd26bf namespace=moby
	Oct 26 00:53:33 functional-701000 dockerd[5688]: time="2024-10-26T00:53:33.984067123Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 26 00:53:37 functional-701000 dockerd[5688]: time="2024-10-26T00:53:37.790568501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 26 00:53:37 functional-701000 dockerd[5688]: time="2024-10-26T00:53:37.790615794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 26 00:53:37 functional-701000 dockerd[5688]: time="2024-10-26T00:53:37.790621378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 26 00:53:37 functional-701000 dockerd[5688]: time="2024-10-26T00:53:37.790832718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 26 00:53:37 functional-701000 dockerd[5688]: time="2024-10-26T00:53:37.797476751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 26 00:53:37 functional-701000 dockerd[5688]: time="2024-10-26T00:53:37.797520002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 26 00:53:37 functional-701000 dockerd[5688]: time="2024-10-26T00:53:37.797530752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 26 00:53:37 functional-701000 dockerd[5688]: time="2024-10-26T00:53:37.797792052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 26 00:53:37 functional-701000 cri-dockerd[5963]: time="2024-10-26T00:53:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cc553af1017a84bb1f4f4fd6811c7ad769f8f01712873375117b7a29f2d22b2f/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 26 00:53:37 functional-701000 cri-dockerd[5963]: time="2024-10-26T00:53:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/45191388ad98ec137d1b88bbc1e0107fd8b617ad9447c4ca43f4b9f734b03a6c/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Oct 26 00:53:38 functional-701000 dockerd[5682]: time="2024-10-26T00:53:38.069776367Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" spanID=4090286282f85340 traceID=55eec492d4c113140ccd20beb5581f77
	Oct 26 00:53:39 functional-701000 cri-dockerd[5963]: time="2024-10-26T00:53:39Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Oct 26 00:53:39 functional-701000 dockerd[5688]: time="2024-10-26T00:53:39.681491213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 26 00:53:39 functional-701000 dockerd[5688]: time="2024-10-26T00:53:39.681541381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 26 00:53:39 functional-701000 dockerd[5688]: time="2024-10-26T00:53:39.681551548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 26 00:53:39 functional-701000 dockerd[5688]: time="2024-10-26T00:53:39.681735095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 26 00:53:39 functional-701000 dockerd[5682]: time="2024-10-26T00:53:39.835845789Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" spanID=c3d65c3207a3ddaa traceID=1831f663ca473099b5c8f3273d80d744
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	3254aa058403b       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   2 seconds ago        Running             dashboard-metrics-scraper   0                   cc553af1017a8       dashboard-metrics-scraper-c5db448b4-zvnl9
	400f532312c61       72565bf5bbedf                                                                                          8 seconds ago        Exited              echoserver-arm              2                   bbd687b27f3a5       hello-node-64b4f8f9ff-wvcbj
	0ff13f92375f1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    11 seconds ago       Exited              mount-munger                0                   da2890b28fe03       busybox-mount
	4be41f62461c8       72565bf5bbedf                                                                                          14 seconds ago       Exited              echoserver-arm              2                   9626cf18542d7       hello-node-connect-65d86f57f4-sxmwl
	6555c1f4e2c85       nginx@sha256:28402db69fec7c17e179ea87882667f1e054391138f77ffaf0c3eb388efc3ffb                          28 seconds ago       Running             myfrontend                  0                   1081e0913decd       sp-pod
	20382e1a17c95       nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                          41 seconds ago       Running             nginx                       0                   1844ea2d31960       nginx-svc
	2746a77acaca9       2f6c962e7b831                                                                                          About a minute ago   Running             coredns                     2                   f7d74c39184d3       coredns-7c65d6cfc9-r5n5s
	83b827773a613       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         2                   edbd8addf77d7       storage-provisioner
	68731b8e14d5d       021d242013305                                                                                          About a minute ago   Running             kube-proxy                  2                   b545829d9b723       kube-proxy-zrjxk
	de4a14b5d1df4       27e3830e14027                                                                                          About a minute ago   Running             etcd                        2                   95b5b54952ae9       etcd-functional-701000
	a0d3a3f1d561c       9404aea098d9e                                                                                          About a minute ago   Running             kube-controller-manager     2                   fbcdb17b9a999       kube-controller-manager-functional-701000
	403185d0b99b3       d6b061e73ae45                                                                                          About a minute ago   Running             kube-scheduler              2                   2d9a6ee4acd42       kube-scheduler-functional-701000
	bd771a8fb7477       f9c26480f1e72                                                                                          About a minute ago   Running             kube-apiserver              0                   65162ffb61aa5       kube-apiserver-functional-701000
	d639ebe280251       2f6c962e7b831                                                                                          About a minute ago   Exited              coredns                     1                   832338da45b7f       coredns-7c65d6cfc9-r5n5s
	2789c038427dd       ba04bb24b9575                                                                                          About a minute ago   Exited              storage-provisioner         1                   cba21e7a38465       storage-provisioner
	af3412e6ba6bf       021d242013305                                                                                          About a minute ago   Exited              kube-proxy                  1                   b6ef73ed3b354       kube-proxy-zrjxk
	1e4a90868c86b       9404aea098d9e                                                                                          About a minute ago   Exited              kube-controller-manager     1                   448fad13f0ae3       kube-controller-manager-functional-701000
	63176e00c7bc4       27e3830e14027                                                                                          About a minute ago   Exited              etcd                        1                   db01f2e73c361       etcd-functional-701000
	6a2c28fa3ac4f       d6b061e73ae45                                                                                          About a minute ago   Exited              kube-scheduler              1                   002da40e460f8       kube-scheduler-functional-701000
	
	
	==> coredns [2746a77acaca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54287 - 23847 "HINFO IN 6324257395230334122.1616349466150875637. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010557567s
	[INFO] 10.244.0.1:28832 - 47712 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000102545s
	[INFO] 10.244.0.1:54605 - 63150 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000099045s
	[INFO] 10.244.0.1:18787 - 20555 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000031751s
	[INFO] 10.244.0.1:7863 - 64927 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.00129471s
	[INFO] 10.244.0.1:28619 - 13060 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000113671s
	[INFO] 10.244.0.1:59539 - 25096 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000207965s
	
	
	==> coredns [d639ebe28025] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48176 - 43392 "HINFO IN 7946336888527406538.5551935890931525240. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011602924s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-701000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-701000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=functional-701000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_25T17_51_11_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 00:51:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-701000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 00:53:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 00:53:32 +0000   Sat, 26 Oct 2024 00:51:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 00:53:32 +0000   Sat, 26 Oct 2024 00:51:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 00:53:32 +0000   Sat, 26 Oct 2024 00:51:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 00:53:32 +0000   Sat, 26 Oct 2024 00:51:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-701000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 a9e4a02ba4594dd4956bdcbe06775117
	  System UUID:                a9e4a02ba4594dd4956bdcbe06775117
	  Boot ID:                    87e37adb-3139-4ffe-9f50-5eeddb1355e7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-wvcbj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  default                     hello-node-connect-65d86f57f4-sxmwl          0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 coredns-7c65d6cfc9-r5n5s                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m26s
	  kube-system                 etcd-functional-701000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m32s
	  kube-system                 kube-apiserver-functional-701000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-controller-manager-functional-701000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-zrjxk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-scheduler-functional-701000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-zvnl9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-pl7dt        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m25s                  kube-proxy       
	  Normal  Starting                 69s                    kube-proxy       
	  Normal  Starting                 115s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m32s (x2 over 2m32s)  kubelet          Node functional-701000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m32s (x2 over 2m32s)  kubelet          Node functional-701000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m32s (x2 over 2m32s)  kubelet          Node functional-701000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m32s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m28s                  kubelet          Node functional-701000 status is now: NodeReady
	  Normal  RegisteredNode           2m27s                  node-controller  Node functional-701000 event: Registered Node functional-701000 in Controller
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)        kubelet          Node functional-701000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)        kubelet          Node functional-701000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m                     kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m (x7 over 2m)        kubelet          Node functional-701000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m                     kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           114s                   node-controller  Node functional-701000 event: Registered Node functional-701000 in Controller
	  Normal  Starting                 74s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  74s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  73s (x8 over 74s)      kubelet          Node functional-701000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s (x8 over 74s)      kubelet          Node functional-701000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s (x7 over 74s)      kubelet          Node functional-701000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           67s                    node-controller  Node functional-701000 event: Registered Node functional-701000 in Controller
	
	
	==> dmesg <==
	[Oct26 00:52] systemd-fstab-generator[4784]: Ignoring "noauto" option for root device
	[  +0.054018] kauditd_printk_skb: 33 callbacks suppressed
	[ +11.733608] systemd-fstab-generator[5210]: Ignoring "noauto" option for root device
	[  +0.054427] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.103537] systemd-fstab-generator[5243]: Ignoring "noauto" option for root device
	[  +0.102250] systemd-fstab-generator[5256]: Ignoring "noauto" option for root device
	[  +0.098446] systemd-fstab-generator[5270]: Ignoring "noauto" option for root device
	[  +5.113806] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.409062] systemd-fstab-generator[5912]: Ignoring "noauto" option for root device
	[  +0.088839] systemd-fstab-generator[5924]: Ignoring "noauto" option for root device
	[  +0.078044] systemd-fstab-generator[5936]: Ignoring "noauto" option for root device
	[  +0.084340] systemd-fstab-generator[5951]: Ignoring "noauto" option for root device
	[  +0.224758] systemd-fstab-generator[6119]: Ignoring "noauto" option for root device
	[  +1.121409] systemd-fstab-generator[6244]: Ignoring "noauto" option for root device
	[  +1.074320] kauditd_printk_skb: 194 callbacks suppressed
	[  +5.386006] kauditd_printk_skb: 36 callbacks suppressed
	[ +12.032851] systemd-fstab-generator[7270]: Ignoring "noauto" option for root device
	[  +5.985729] kauditd_printk_skb: 28 callbacks suppressed
	[  +6.881432] kauditd_printk_skb: 19 callbacks suppressed
	[Oct26 00:53] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.176483] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.506631] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.254669] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.111317] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.373569] kauditd_printk_skb: 25 callbacks suppressed
	
	
	==> etcd [63176e00c7bc] <==
	{"level":"info","ts":"2024-10-26T00:51:45.059971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-26T00:51:45.060061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-10-26T00:51:45.060098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-10-26T00:51:45.060160Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-26T00:51:45.060243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-10-26T00:51:45.060277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-26T00:51:45.063213Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-26T00:51:45.063584Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-26T00:51:45.063820Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-26T00:51:45.062762Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-701000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-26T00:51:45.064218Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-26T00:51:45.067049Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-26T00:51:45.068105Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-26T00:51:45.068916Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-26T00:51:45.069497Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-10-26T00:52:14.754018Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-10-26T00:52:14.754049Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-701000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-10-26T00:52:14.754095Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-26T00:52:14.754142Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-26T00:52:14.765317Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-10-26T00:52:14.765340Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-10-26T00:52:14.766473Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-10-26T00:52:14.768060Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-26T00:52:14.768091Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-26T00:52:14.768095Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-701000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [de4a14b5d1df] <==
	{"level":"info","ts":"2024-10-26T00:52:30.033640Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-26T00:52:30.033679Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-26T00:52:30.033689Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-26T00:52:30.034101Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-26T00:52:30.034723Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-26T00:52:30.034775Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-26T00:52:30.034828Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-10-26T00:52:30.035375Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-26T00:52:30.039523Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-26T00:52:31.175578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-10-26T00:52:31.175736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-10-26T00:52:31.175802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-10-26T00:52:31.175897Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-10-26T00:52:31.176073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-26T00:52:31.176352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-10-26T00:52:31.176446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-10-26T00:52:31.181787Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-701000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-26T00:52:31.181876Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-26T00:52:31.182391Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-26T00:52:31.182520Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-26T00:52:31.182403Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-26T00:52:31.183822Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-26T00:52:31.184073Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-26T00:52:31.185488Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-26T00:52:31.185496Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> kernel <==
	 00:53:42 up 2 min,  0 users,  load average: 0.88, 0.60, 0.25
	Linux functional-701000 5.10.207 #1 SMP PREEMPT Tue Oct 15 16:10:02 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [bd771a8fb747] <==
	I1026 00:52:31.795320       1 aggregator.go:171] initial CRD sync complete...
	I1026 00:52:31.795329       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 00:52:31.795336       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 00:52:31.795368       1 cache.go:39] Caches are synced for autoregister controller
	I1026 00:52:31.796401       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1026 00:52:31.796418       1 policy_source.go:224] refreshing policies
	I1026 00:52:31.807612       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 00:52:32.679992       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 00:52:32.977085       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1026 00:52:32.981479       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1026 00:52:32.992414       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1026 00:52:32.999716       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 00:52:33.001832       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 00:52:35.248479       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 00:52:35.275778       1 controller.go:615] quota admission added evaluator for: endpoints
	I1026 00:52:51.754208       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.104.220"}
	I1026 00:52:56.758780       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.26.84"}
	I1026 00:53:07.178266       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1026 00:53:07.222473       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.56.111"}
	E1026 00:53:12.055457       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49663: use of closed network connection
	E1026 00:53:20.615214       1 conn.go:339] Error on socket receive: read tcp 192.168.105.4:8441->192.168.105.1:49673: use of closed network connection
	I1026 00:53:20.696833       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.53.61"}
	I1026 00:53:37.340478       1 controller.go:615] quota admission added evaluator for: namespaces
	I1026 00:53:37.421525       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.183.26"}
	I1026 00:53:37.430487       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.19.117"}
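Each "allocated clusterIPs" entry above corresponds to a Service created during the run (invalid-svc, nginx-svc, hello-node-connect, hello-node, and the two dashboard services). A hedged cross-check against the live cluster, reusing the kubectl context that appears elsewhere in this report:

	$ kubectl --context functional-701000 get svc -A -o wide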
	
	
	==> kube-controller-manager [1e4a90868c86] <==
	I1026 00:51:48.943789       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1026 00:51:48.943825       1 shared_informer.go:320] Caches are synced for deployment
	I1026 00:51:48.944081       1 shared_informer.go:320] Caches are synced for cronjob
	I1026 00:51:48.944110       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1026 00:51:48.946636       1 shared_informer.go:320] Caches are synced for daemon sets
	I1026 00:51:48.948819       1 shared_informer.go:320] Caches are synced for node
	I1026 00:51:48.948842       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1026 00:51:48.948853       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 00:51:48.948856       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1026 00:51:48.948858       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1026 00:51:48.948884       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-701000"
	I1026 00:51:48.950795       1 shared_informer.go:320] Caches are synced for PVC protection
	I1026 00:51:48.950801       1 shared_informer.go:320] Caches are synced for endpoint
	I1026 00:51:48.951877       1 shared_informer.go:320] Caches are synced for resource quota
	I1026 00:51:48.954109       1 shared_informer.go:320] Caches are synced for taint
	I1026 00:51:48.954181       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 00:51:48.954207       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-701000"
	I1026 00:51:48.954273       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 00:51:49.041163       1 shared_informer.go:320] Caches are synced for resource quota
	I1026 00:51:49.044119       1 shared_informer.go:320] Caches are synced for PV protection
	I1026 00:51:49.094387       1 shared_informer.go:320] Caches are synced for persistent volume
	I1026 00:51:49.143152       1 shared_informer.go:320] Caches are synced for attach detach
	I1026 00:51:49.553739       1 shared_informer.go:320] Caches are synced for garbage collector
	I1026 00:51:49.553806       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 00:51:49.558958       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-controller-manager [a0d3a3f1d561] <==
	I1026 00:53:37.373415       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.950661ms"
	E1026 00:53:37.373434       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1026 00:53:37.380817       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="12.390997ms"
	E1026 00:53:37.380843       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1026 00:53:37.381022       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.717684ms"
	E1026 00:53:37.381033       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1026 00:53:37.391437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.905905ms"
	E1026 00:53:37.391492       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1026 00:53:37.400994       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.401544ms"
	E1026 00:53:37.401019       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1026 00:53:37.401046       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="12.967973ms"
	E1026 00:53:37.401068       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1026 00:53:37.408164       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.57568ms"
	E1026 00:53:37.408192       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1026 00:53:37.453474       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.48694ms"
	I1026 00:53:37.465636       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="12.133615ms"
	I1026 00:53:37.471740       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="22.215668ms"
	I1026 00:53:37.476783       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.896731ms"
	I1026 00:53:37.476890       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="25.709µs"
	I1026 00:53:37.481545       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="20.125µs"
	I1026 00:53:37.487540       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="21.88245ms"
	I1026 00:53:37.487665       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="19µs"
	I1026 00:53:39.918710       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="4.076704ms"
	I1026 00:53:39.918741       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="17.459µs"
	I1026 00:53:40.921366       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="50.876µs"
	
	
	==> kube-proxy [68731b8e14d5] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1026 00:52:32.469022       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1026 00:52:32.472512       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1026 00:52:32.472536       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 00:52:32.480060       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1026 00:52:32.480075       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 00:52:32.480086       1 server_linux.go:169] "Using iptables Proxier"
	I1026 00:52:32.480749       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 00:52:32.480865       1 server.go:483] "Version info" version="v1.31.2"
	I1026 00:52:32.480873       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 00:52:32.481336       1 config.go:199] "Starting service config controller"
	I1026 00:52:32.481350       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1026 00:52:32.481389       1 config.go:105] "Starting endpoint slice config controller"
	I1026 00:52:32.481395       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1026 00:52:32.481616       1 config.go:328] "Starting node config controller"
	I1026 00:52:32.481657       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1026 00:52:32.582284       1 shared_informer.go:320] Caches are synced for node config
	I1026 00:52:32.582301       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1026 00:52:32.582356       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [af3412e6ba6b] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1026 00:51:46.180452       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1026 00:51:46.184846       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E1026 00:51:46.184979       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 00:51:46.198027       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1026 00:51:46.198050       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1026 00:51:46.198064       1 server_linux.go:169] "Using iptables Proxier"
	I1026 00:51:46.198709       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 00:51:46.198801       1 server.go:483] "Version info" version="v1.31.2"
	I1026 00:51:46.198810       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 00:51:46.199237       1 config.go:199] "Starting service config controller"
	I1026 00:51:46.199252       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1026 00:51:46.199262       1 config.go:105] "Starting endpoint slice config controller"
	I1026 00:51:46.199267       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1026 00:51:46.199589       1 config.go:328] "Starting node config controller"
	I1026 00:51:46.199594       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1026 00:51:46.299552       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1026 00:51:46.299594       1 shared_informer.go:320] Caches are synced for service config
	I1026 00:51:46.299726       1 shared_informer.go:320] Caches are synced for node config
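Both kube-proxy containers fail the same nftables cleanup (add table ip kube-proxy / add table ip6 kube-proxy) with "Operation not supported" before falling back to the iptables proxier, which suggests the Buildroot guest kernel lacks nftables support. A hedged reproduction from inside the kube-proxy container, whose image demonstrably ships the nft binary (the cleanup above runs it); kubectl exec against ds/kube-proxy selects one of the daemonset's pods:

	$ kubectl --context functional-701000 -n kube-system exec ds/kube-proxy -- nft add table ip kube-proxy

On this guest the command should fail with the same "Operation not supported" error seen in the log.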
	
	
	==> kube-scheduler [403185d0b99b] <==
	I1026 00:52:30.409833       1 serving.go:386] Generated self-signed cert in-memory
	W1026 00:52:31.703594       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 00:52:31.703609       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 00:52:31.703613       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 00:52:31.703617       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 00:52:31.727378       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1026 00:52:31.727433       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 00:52:31.728396       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1026 00:52:31.728459       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 00:52:31.728476       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 00:52:31.728513       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 00:52:31.828634       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [6a2c28fa3ac4] <==
	I1026 00:51:43.966723       1 serving.go:386] Generated self-signed cert in-memory
	W1026 00:51:45.599730       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 00:51:45.599826       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 00:51:45.599871       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 00:51:45.599890       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 00:51:45.621279       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1026 00:51:45.621386       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 00:51:45.625709       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 00:51:45.625726       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1026 00:51:45.625736       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 00:51:45.625761       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 00:51:45.728886       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 00:52:14.747448       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1026 00:52:14.747505       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1026 00:52:14.747607       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1026 00:52:14.747606       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 26 00:53:28 functional-701000 kubelet[6251]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 26 00:53:28 functional-701000 kubelet[6251]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 26 00:53:28 functional-701000 kubelet[6251]: I1026 00:53:28.988586    6251 scope.go:117] "RemoveContainer" containerID="659a506d95dbc86c51293048130b2b2b00f61de0f8ef587f7b8282a00451022c"
	Oct 26 00:53:29 functional-701000 kubelet[6251]: I1026 00:53:29.088720    6251 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e64d9070-5969-4f5f-91de-742f9e62e489-test-volume\") pod \"busybox-mount\" (UID: \"e64d9070-5969-4f5f-91de-742f9e62e489\") " pod="default/busybox-mount"
	Oct 26 00:53:29 functional-701000 kubelet[6251]: I1026 00:53:29.088742    6251 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trgfz\" (UniqueName: \"kubernetes.io/projected/e64d9070-5969-4f5f-91de-742f9e62e489-kube-api-access-trgfz\") pod \"busybox-mount\" (UID: \"e64d9070-5969-4f5f-91de-742f9e62e489\") " pod="default/busybox-mount"
	Oct 26 00:53:33 functional-701000 kubelet[6251]: I1026 00:53:33.032350    6251 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trgfz\" (UniqueName: \"kubernetes.io/projected/e64d9070-5969-4f5f-91de-742f9e62e489-kube-api-access-trgfz\") pod \"e64d9070-5969-4f5f-91de-742f9e62e489\" (UID: \"e64d9070-5969-4f5f-91de-742f9e62e489\") "
	Oct 26 00:53:33 functional-701000 kubelet[6251]: I1026 00:53:33.032387    6251 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e64d9070-5969-4f5f-91de-742f9e62e489-test-volume\") pod \"e64d9070-5969-4f5f-91de-742f9e62e489\" (UID: \"e64d9070-5969-4f5f-91de-742f9e62e489\") "
	Oct 26 00:53:33 functional-701000 kubelet[6251]: I1026 00:53:33.032443    6251 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e64d9070-5969-4f5f-91de-742f9e62e489-test-volume" (OuterVolumeSpecName: "test-volume") pod "e64d9070-5969-4f5f-91de-742f9e62e489" (UID: "e64d9070-5969-4f5f-91de-742f9e62e489"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Oct 26 00:53:33 functional-701000 kubelet[6251]: I1026 00:53:33.034104    6251 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e64d9070-5969-4f5f-91de-742f9e62e489-kube-api-access-trgfz" (OuterVolumeSpecName: "kube-api-access-trgfz") pod "e64d9070-5969-4f5f-91de-742f9e62e489" (UID: "e64d9070-5969-4f5f-91de-742f9e62e489"). InnerVolumeSpecName "kube-api-access-trgfz". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 26 00:53:33 functional-701000 kubelet[6251]: I1026 00:53:33.132766    6251 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-trgfz\" (UniqueName: \"kubernetes.io/projected/e64d9070-5969-4f5f-91de-742f9e62e489-kube-api-access-trgfz\") on node \"functional-701000\" DevicePath \"\""
	Oct 26 00:53:33 functional-701000 kubelet[6251]: I1026 00:53:33.132782    6251 reconciler_common.go:288] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/e64d9070-5969-4f5f-91de-742f9e62e489-test-volume\") on node \"functional-701000\" DevicePath \"\""
	Oct 26 00:53:33 functional-701000 kubelet[6251]: I1026 00:53:33.857040    6251 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da2890b28fe036dd2d3e22b17c70870759d40f158e90e419218e2205f6a366f0"
	Oct 26 00:53:33 functional-701000 kubelet[6251]: I1026 00:53:33.904402    6251 scope.go:117] "RemoveContainer" containerID="f8f429bc30a42d6122d890e73826d6ba13180a45e5fbf6efedc05a7fe371ede6"
	Oct 26 00:53:34 functional-701000 kubelet[6251]: I1026 00:53:34.869545    6251 scope.go:117] "RemoveContainer" containerID="f8f429bc30a42d6122d890e73826d6ba13180a45e5fbf6efedc05a7fe371ede6"
	Oct 26 00:53:34 functional-701000 kubelet[6251]: I1026 00:53:34.869915    6251 scope.go:117] "RemoveContainer" containerID="400f532312c61a20832e17ec17eba75ddb6b032a771523e89b268731a6cd26bf"
	Oct 26 00:53:34 functional-701000 kubelet[6251]: E1026 00:53:34.870036    6251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-wvcbj_default(c5e26f91-156d-4a4b-acc0-c85a94daf882)\"" pod="default/hello-node-64b4f8f9ff-wvcbj" podUID="c5e26f91-156d-4a4b-acc0-c85a94daf882"
	Oct 26 00:53:37 functional-701000 kubelet[6251]: E1026 00:53:37.447454    6251 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e64d9070-5969-4f5f-91de-742f9e62e489" containerName="mount-munger"
	Oct 26 00:53:37 functional-701000 kubelet[6251]: I1026 00:53:37.447532    6251 memory_manager.go:354] "RemoveStaleState removing state" podUID="e64d9070-5969-4f5f-91de-742f9e62e489" containerName="mount-munger"
	Oct 26 00:53:37 functional-701000 kubelet[6251]: I1026 00:53:37.571293    6251 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hpf5\" (UniqueName: \"kubernetes.io/projected/fdb89da3-d232-412c-8876-b5e58dcc743e-kube-api-access-9hpf5\") pod \"dashboard-metrics-scraper-c5db448b4-zvnl9\" (UID: \"fdb89da3-d232-412c-8876-b5e58dcc743e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-zvnl9"
	Oct 26 00:53:37 functional-701000 kubelet[6251]: I1026 00:53:37.571324    6251 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ace1a3da-96f9-440e-94a2-7aec1603ab75-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-pl7dt\" (UID: \"ace1a3da-96f9-440e-94a2-7aec1603ab75\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-pl7dt"
	Oct 26 00:53:37 functional-701000 kubelet[6251]: I1026 00:53:37.571338    6251 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lds2\" (UniqueName: \"kubernetes.io/projected/ace1a3da-96f9-440e-94a2-7aec1603ab75-kube-api-access-8lds2\") pod \"kubernetes-dashboard-695b96c756-pl7dt\" (UID: \"ace1a3da-96f9-440e-94a2-7aec1603ab75\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-pl7dt"
	Oct 26 00:53:37 functional-701000 kubelet[6251]: I1026 00:53:37.571352    6251 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fdb89da3-d232-412c-8876-b5e58dcc743e-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-zvnl9\" (UID: \"fdb89da3-d232-412c-8876-b5e58dcc743e\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-zvnl9"
	Oct 26 00:53:39 functional-701000 kubelet[6251]: I1026 00:53:39.915136    6251 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-zvnl9" podStartSLOduration=1.172418191 podStartE2EDuration="2.915125639s" podCreationTimestamp="2024-10-26 00:53:37 +0000 UTC" firstStartedPulling="2024-10-26 00:53:37.877118187 +0000 UTC m=+69.036018431" lastFinishedPulling="2024-10-26 00:53:39.619825635 +0000 UTC m=+70.778725879" observedRunningTime="2024-10-26 00:53:39.914919258 +0000 UTC m=+71.073819460" watchObservedRunningTime="2024-10-26 00:53:39.915125639 +0000 UTC m=+71.074025841"
	Oct 26 00:53:40 functional-701000 kubelet[6251]: I1026 00:53:40.905305    6251 scope.go:117] "RemoveContainer" containerID="4be41f62461c84fca41f331acd8cb713c06c21345b1ae0fadaf12e9ef8fb2856"
	Oct 26 00:53:40 functional-701000 kubelet[6251]: E1026 00:53:40.905479    6251 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-sxmwl_default(58f892c6-f7c6-46a7-b5e4-2c39538682b6)\"" pod="default/hello-node-connect-65d86f57f4-sxmwl" podUID="58f892c6-f7c6-46a7-b5e4-2c39538682b6"
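The kubelet entries above show the echoserver-arm containers behind both hello-node and hello-node-connect cycling through CrashLoopBackOff, which is what ultimately breaks the service-connect check. To see why the container keeps exiting, one would normally fetch the logs of the previous, crashed instance; a hedged sketch using the pod name from the last entry:

	$ kubectl --context functional-701000 logs pod/hello-node-connect-65d86f57f4-sxmwl --previous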
	
	
	==> storage-provisioner [2789c038427d] <==
	I1026 00:51:46.103208       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 00:51:46.120909       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 00:51:46.120938       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 00:51:46.127305       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 00:51:46.127652       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-701000_9df101de-a63f-4484-accc-12730023c126!
	I1026 00:51:46.128200       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"35a652c7-faf1-41d0-ac55-77f4a7616062", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-701000_9df101de-a63f-4484-accc-12730023c126 became leader
	I1026 00:51:46.228671       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-701000_9df101de-a63f-4484-accc-12730023c126!
	
	
	==> storage-provisioner [83b827773a61] <==
	I1026 00:52:32.441493       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 00:52:32.450251       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 00:52:32.450688       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 00:52:49.865636       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 00:52:49.866452       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-701000_51669467-5bd7-4847-86bf-7bcad5d4c7d7!
	I1026 00:52:49.868523       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"35a652c7-faf1-41d0-ac55-77f4a7616062", APIVersion:"v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-701000_51669467-5bd7-4847-86bf-7bcad5d4c7d7 became leader
	I1026 00:52:49.968059       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-701000_51669467-5bd7-4847-86bf-7bcad5d4c7d7!
	I1026 00:53:00.866282       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1026 00:53:00.866347       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    30bf7b8a-f4f9-4d5f-bc4f-2ef57a4e18e5 305 0 2024-10-26 00:51:16 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-10-26 00:51:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-a170f57b-a61a-4ff0-b595-1d72e6db41d4 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  a170f57b-a61a-4ff0-b595-1d72e6db41d4 647 0 2024-10-26 00:53:00 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-10-26 00:53:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-10-26 00:53:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1026 00:53:00.866725       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-a170f57b-a61a-4ff0-b595-1d72e6db41d4" provisioned
	I1026 00:53:00.866743       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1026 00:53:00.866751       1 volume_store.go:212] Trying to save persistentvolume "pvc-a170f57b-a61a-4ff0-b595-1d72e6db41d4"
	I1026 00:53:00.867235       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"a170f57b-a61a-4ff0-b595-1d72e6db41d4", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1026 00:53:00.870967       1 volume_store.go:219] persistentvolume "pvc-a170f57b-a61a-4ff0-b595-1d72e6db41d4" saved
	I1026 00:53:00.872555       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"a170f57b-a61a-4ff0-b595-1d72e6db41d4", APIVersion:"v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-a170f57b-a61a-4ff0-b595-1d72e6db41d4
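The provisioner log above traces the full claim lifecycle: lease acquisition, provisioning of default/myclaim under /tmp/hostpath-provisioner/default/myclaim, and the saved PersistentVolume pvc-a170f57b-a61a-4ff0-b595-1d72e6db41d4. A hedged cross-check of both objects, using names taken from the log:

	$ kubectl --context functional-701000 get pvc myclaim
	$ kubectl --context functional-701000 get pv pvc-a170f57b-a61a-4ff0-b595-1d72e6db41d4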
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-701000 -n functional-701000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-701000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount kubernetes-dashboard-695b96c756-pl7dt
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-701000 describe pod busybox-mount kubernetes-dashboard-695b96c756-pl7dt
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-701000 describe pod busybox-mount kubernetes-dashboard-695b96c756-pl7dt: exit status 1 (41.932333ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-701000/192.168.105.4
	Start Time:       Fri, 25 Oct 2024 17:53:28 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://0ff13f92375f175e3bdd24ab3f3bba3e3e2845dd5c28ec218a733693f5eeaa2b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 25 Oct 2024 17:53:30 -0700
	      Finished:     Fri, 25 Oct 2024 17:53:30 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-trgfz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-trgfz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  13s   default-scheduler  Successfully assigned default/busybox-mount to functional-701000
	  Normal  Pulling    13s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     12s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.409s (1.409s including waiting). Image size: 3547125 bytes.
	  Normal  Created    12s   kubelet            Created container mount-munger
	  Normal  Started    12s   kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-695b96c756-pl7dt" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-701000 describe pod busybox-mount kubernetes-dashboard-695b96c756-pl7dt: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (35.42s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (725.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-499000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E1025 17:54:08.214377    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:56:24.324053    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:56:52.056400    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:57:55.645929    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:57:55.653653    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:57:55.667052    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:57:55.690440    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:57:55.733864    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:57:55.817299    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:57:55.980805    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:57:56.304288    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:57:56.947843    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:57:58.231542    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:58:00.795416    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:58:05.919140    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:58:16.162849    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:58:36.646381    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:59:17.609661    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:00:39.532508    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:01:24.346372    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:02:55.671983    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:03:23.404371    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-499000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 52 (12m5.30286275s)

                                                
                                                
-- stdout --
	* [ha-499000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-499000" primary control-plane node in "ha-499000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Deleting "ha-499000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 17:53:49.565661    2463 out.go:345] Setting OutFile to fd 1 ...
	I1025 17:53:49.565820    2463 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 17:53:49.565823    2463 out.go:358] Setting ErrFile to fd 2...
	I1025 17:53:49.565826    2463 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 17:53:49.565964    2463 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 17:53:49.567144    2463 out.go:352] Setting JSON to false
	I1025 17:53:49.586384    2463 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1400,"bootTime":1729902629,"procs":566,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 17:53:49.586472    2463 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 17:53:49.590683    2463 out.go:177] * [ha-499000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 17:53:49.597603    2463 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 17:53:49.597629    2463 notify.go:220] Checking for updates...
	I1025 17:53:49.604535    2463 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 17:53:49.607656    2463 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 17:53:49.610669    2463 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 17:53:49.611634    2463 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 17:53:49.614617    2463 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 17:53:49.617904    2463 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 17:53:49.621492    2463 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 17:53:49.627636    2463 start.go:297] selected driver: qemu2
	I1025 17:53:49.627641    2463 start.go:901] validating driver "qemu2" against <nil>
	I1025 17:53:49.627652    2463 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 17:53:49.630642    2463 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 17:53:49.633536    2463 out.go:177] * Automatically selected the socket_vmnet network
	I1025 17:53:49.637730    2463 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 17:53:49.637755    2463 cni.go:84] Creating CNI manager for ""
	I1025 17:53:49.637790    2463 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1025 17:53:49.637795    2463 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 17:53:49.637822    2463 start.go:340] cluster config:
	{Name:ha-499000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-499000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 17:53:49.642730    2463 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 17:53:49.649660    2463 out.go:177] * Starting "ha-499000" primary control-plane node in "ha-499000" cluster
	I1025 17:53:49.653668    2463 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 17:53:49.653682    2463 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 17:53:49.653694    2463 cache.go:56] Caching tarball of preloaded images
	I1025 17:53:49.653768    2463 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 17:53:49.653774    2463 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 17:53:49.653970    2463 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/ha-499000/config.json ...
	I1025 17:53:49.653981    2463 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/ha-499000/config.json: {Name:mk4d51ea29d6cb338974cb6820573e5669305235 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:53:49.654271    2463 start.go:360] acquireMachinesLock for ha-499000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 17:53:49.654323    2463 start.go:364] duration metric: took 45.875µs to acquireMachinesLock for "ha-499000"
	I1025 17:53:49.654336    2463 start.go:93] Provisioning new machine with config: &{Name:ha-499000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-499000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 17:53:49.654375    2463 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 17:53:49.660683    2463 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 17:53:49.684423    2463 start.go:159] libmachine.API.Create for "ha-499000" (driver="qemu2")
	I1025 17:53:49.684458    2463 client.go:168] LocalClient.Create starting
	I1025 17:53:49.684557    2463 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 17:53:49.684603    2463 main.go:141] libmachine: Decoding PEM data...
	I1025 17:53:49.684614    2463 main.go:141] libmachine: Parsing certificate...
	I1025 17:53:49.684659    2463 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 17:53:49.684692    2463 main.go:141] libmachine: Decoding PEM data...
	I1025 17:53:49.684702    2463 main.go:141] libmachine: Parsing certificate...
	I1025 17:53:49.685043    2463 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 17:53:49.906062    2463 main.go:141] libmachine: Creating SSH key...
	I1025 17:53:49.951042    2463 main.go:141] libmachine: Creating Disk image...
	I1025 17:53:49.951050    2463 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 17:53:49.951216    2463 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/disk.qcow2
	I1025 17:53:49.962986    2463 main.go:141] libmachine: STDOUT: 
	I1025 17:53:49.963005    2463 main.go:141] libmachine: STDERR: 
	I1025 17:53:49.963060    2463 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/disk.qcow2 +20000M
	I1025 17:53:49.971472    2463 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 17:53:49.971488    2463 main.go:141] libmachine: STDERR: 
	I1025 17:53:49.971511    2463 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/disk.qcow2
	I1025 17:53:49.971515    2463 main.go:141] libmachine: Starting QEMU VM...
	I1025 17:53:49.971526    2463 qemu.go:418] Using hvf for hardware acceleration
	I1025 17:53:49.971551    2463 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:86:82:23:b8:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/disk.qcow2
	I1025 17:53:50.010729    2463 main.go:141] libmachine: STDOUT: 
	I1025 17:53:50.010755    2463 main.go:141] libmachine: STDERR: 
	I1025 17:53:50.010759    2463 main.go:141] libmachine: Attempt 0
	I1025 17:53:50.010789    2463 main.go:141] libmachine: Searching for de:86:82:23:b8:09 in /var/db/dhcpd_leases ...
	I1025 17:53:50.010889    2463 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1025 17:53:50.010905    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7e:89:ce:23:fb:f1 ID:1,7e:89:ce:23:fb:f1 Lease:0x671c4afa}
	I1025 17:53:50.010916    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7e:e7:5b:c2:07:87 ID:1,7e:e7:5b:c2:7:87 Lease:0x671c3ca8}
	I1025 17:53:50.010923    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:7c:0f:17:27:49 ID:1,3e:7c:f:17:27:49 Lease:0x671c3c81}
	I1025 17:53:50.010930    2463 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671c4648}
	I1025 17:53:52.013077    2463 main.go:141] libmachine: Attempt 1
	I1025 17:53:52.013248    2463 main.go:141] libmachine: Searching for de:86:82:23:b8:09 in /var/db/dhcpd_leases ...
	I1025 17:53:52.013745    2463 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1025 17:53:52.013801    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7e:89:ce:23:fb:f1 ID:1,7e:89:ce:23:fb:f1 Lease:0x671c4afa}
	I1025 17:53:52.013839    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7e:e7:5b:c2:07:87 ID:1,7e:e7:5b:c2:7:87 Lease:0x671c3ca8}
	I1025 17:53:52.013873    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:7c:0f:17:27:49 ID:1,3e:7c:f:17:27:49 Lease:0x671c3c81}
	I1025 17:53:52.013902    2463 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671c4648}
	I1025 17:53:54.016141    2463 main.go:141] libmachine: Attempt 2
	I1025 17:53:54.016244    2463 main.go:141] libmachine: Searching for de:86:82:23:b8:09 in /var/db/dhcpd_leases ...
	I1025 17:53:54.016645    2463 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1025 17:53:54.016702    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7e:89:ce:23:fb:f1 ID:1,7e:89:ce:23:fb:f1 Lease:0x671c4afa}
	I1025 17:53:54.016736    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7e:e7:5b:c2:07:87 ID:1,7e:e7:5b:c2:7:87 Lease:0x671c3ca8}
	I1025 17:53:54.016765    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:7c:0f:17:27:49 ID:1,3e:7c:f:17:27:49 Lease:0x671c3c81}
	I1025 17:53:54.016797    2463 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671c4648}
	I1025 17:53:56.018976    2463 main.go:141] libmachine: Attempt 3
	I1025 17:53:56.019034    2463 main.go:141] libmachine: Searching for de:86:82:23:b8:09 in /var/db/dhcpd_leases ...
	I1025 17:53:56.019189    2463 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1025 17:53:56.019203    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7e:89:ce:23:fb:f1 ID:1,7e:89:ce:23:fb:f1 Lease:0x671c4afa}
	I1025 17:53:56.019211    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7e:e7:5b:c2:07:87 ID:1,7e:e7:5b:c2:7:87 Lease:0x671c3ca8}
	I1025 17:53:56.019218    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:7c:0f:17:27:49 ID:1,3e:7c:f:17:27:49 Lease:0x671c3c81}
	I1025 17:53:56.019225    2463 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671c4648}
	I1025 17:53:58.021251    2463 main.go:141] libmachine: Attempt 4
	I1025 17:53:58.021268    2463 main.go:141] libmachine: Searching for de:86:82:23:b8:09 in /var/db/dhcpd_leases ...
	I1025 17:53:58.021315    2463 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1025 17:53:58.021322    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7e:89:ce:23:fb:f1 ID:1,7e:89:ce:23:fb:f1 Lease:0x671c4afa}
	I1025 17:53:58.021328    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7e:e7:5b:c2:07:87 ID:1,7e:e7:5b:c2:7:87 Lease:0x671c3ca8}
	I1025 17:53:58.021333    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:7c:0f:17:27:49 ID:1,3e:7c:f:17:27:49 Lease:0x671c3c81}
	I1025 17:53:58.021338    2463 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671c4648}
	I1025 17:54:00.023365    2463 main.go:141] libmachine: Attempt 5
	I1025 17:54:00.023380    2463 main.go:141] libmachine: Searching for de:86:82:23:b8:09 in /var/db/dhcpd_leases ...
	I1025 17:54:00.023456    2463 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1025 17:54:00.023465    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7e:89:ce:23:fb:f1 ID:1,7e:89:ce:23:fb:f1 Lease:0x671c4afa}
	I1025 17:54:00.023472    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7e:e7:5b:c2:07:87 ID:1,7e:e7:5b:c2:7:87 Lease:0x671c3ca8}
	I1025 17:54:00.023478    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:7c:0f:17:27:49 ID:1,3e:7c:f:17:27:49 Lease:0x671c3c81}
	I1025 17:54:00.023483    2463 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671c4648}
	I1025 17:54:02.025539    2463 main.go:141] libmachine: Attempt 6
	I1025 17:54:02.025580    2463 main.go:141] libmachine: Searching for de:86:82:23:b8:09 in /var/db/dhcpd_leases ...
	I1025 17:54:02.025657    2463 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1025 17:54:02.025670    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7e:89:ce:23:fb:f1 ID:1,7e:89:ce:23:fb:f1 Lease:0x671c4afa}
	I1025 17:54:02.025676    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7e:e7:5b:c2:07:87 ID:1,7e:e7:5b:c2:7:87 Lease:0x671c3ca8}
	I1025 17:54:02.025681    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:7c:0f:17:27:49 ID:1,3e:7c:f:17:27:49 Lease:0x671c3c81}
	I1025 17:54:02.025687    2463 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671c4648}
	I1025 17:54:04.027745    2463 main.go:141] libmachine: Attempt 7
	I1025 17:54:04.027801    2463 main.go:141] libmachine: Searching for de:86:82:23:b8:09 in /var/db/dhcpd_leases ...
	I1025 17:54:04.027909    2463 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1025 17:54:04.027923    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:86:82:23:b8:09 ID:1,de:86:82:23:b8:9 Lease:0x671c4bba}
	I1025 17:54:04.027927    2463 main.go:141] libmachine: Found match: de:86:82:23:b8:09
	I1025 17:54:04.027935    2463 main.go:141] libmachine: IP: 192.168.105.5
	I1025 17:54:04.027940    2463 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I1025 17:59:49.683114    2463 start.go:128] duration metric: took 6m0.032985667s to createHost
	I1025 17:59:49.683181    2463 start.go:83] releasing machines lock for "ha-499000", held for 6m0.033145125s
	W1025 17:59:49.683225    2463 start.go:714] error starting host: creating host: create host timed out in 360.000000 seconds
	I1025 17:59:49.693366    2463 out.go:177] * Deleting "ha-499000" in qemu2 ...
	W1025 17:59:49.727746    2463 out.go:270] ! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
	I1025 17:59:49.727784    2463 start.go:729] Will try again in 5 seconds ...
	I1025 17:59:54.729945    2463 start.go:360] acquireMachinesLock for ha-499000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 17:59:54.730533    2463 start.go:364] duration metric: took 449.375µs to acquireMachinesLock for "ha-499000"
	I1025 17:59:54.730679    2463 start.go:93] Provisioning new machine with config: &{Name:ha-499000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-499000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 17:59:54.730959    2463 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 17:59:54.736681    2463 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 17:59:54.786913    2463 start.go:159] libmachine.API.Create for "ha-499000" (driver="qemu2")
	I1025 17:59:54.786959    2463 client.go:168] LocalClient.Create starting
	I1025 17:59:54.787121    2463 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 17:59:54.787205    2463 main.go:141] libmachine: Decoding PEM data...
	I1025 17:59:54.787228    2463 main.go:141] libmachine: Parsing certificate...
	I1025 17:59:54.787308    2463 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 17:59:54.787367    2463 main.go:141] libmachine: Decoding PEM data...
	I1025 17:59:54.787385    2463 main.go:141] libmachine: Parsing certificate...
	I1025 17:59:54.788070    2463 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 17:59:54.955627    2463 main.go:141] libmachine: Creating SSH key...
	I1025 17:59:55.055114    2463 main.go:141] libmachine: Creating Disk image...
	I1025 17:59:55.055121    2463 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 17:59:55.055312    2463 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/disk.qcow2
	I1025 17:59:55.065090    2463 main.go:141] libmachine: STDOUT: 
	I1025 17:59:55.065108    2463 main.go:141] libmachine: STDERR: 
	I1025 17:59:55.065179    2463 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/disk.qcow2 +20000M
	I1025 17:59:55.073534    2463 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 17:59:55.073548    2463 main.go:141] libmachine: STDERR: 
	I1025 17:59:55.073562    2463 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/disk.qcow2
	I1025 17:59:55.073567    2463 main.go:141] libmachine: Starting QEMU VM...
	I1025 17:59:55.073574    2463 qemu.go:418] Using hvf for hardware acceleration
	I1025 17:59:55.073614    2463 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:6f:b5:3c:8a:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/disk.qcow2
	I1025 17:59:55.110034    2463 main.go:141] libmachine: STDOUT: 
	I1025 17:59:55.110059    2463 main.go:141] libmachine: STDERR: 
	I1025 17:59:55.110064    2463 main.go:141] libmachine: Attempt 0
	I1025 17:59:55.110088    2463 main.go:141] libmachine: Searching for d6:6f:b5:3c:8a:d0 in /var/db/dhcpd_leases ...
	I1025 17:59:55.110223    2463 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1025 17:59:55.110235    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:86:82:23:b8:09 ID:1,de:86:82:23:b8:9 Lease:0x671c4bba}
	I1025 17:59:55.110244    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7e:89:ce:23:fb:f1 ID:1,7e:89:ce:23:fb:f1 Lease:0x671c4afa}
	I1025 17:59:55.110269    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7e:e7:5b:c2:07:87 ID:1,7e:e7:5b:c2:7:87 Lease:0x671c3ca8}
	I1025 17:59:55.110275    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:7c:0f:17:27:49 ID:1,3e:7c:f:17:27:49 Lease:0x671c3c81}
	I1025 17:59:55.110282    2463 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671c4648}
	I1025 17:59:57.112417    2463 main.go:141] libmachine: Attempt 1
	I1025 17:59:57.112536    2463 main.go:141] libmachine: Searching for d6:6f:b5:3c:8a:d0 in /var/db/dhcpd_leases ...
	I1025 17:59:57.113023    2463 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1025 17:59:57.113080    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:86:82:23:b8:09 ID:1,de:86:82:23:b8:9 Lease:0x671c4bba}
	I1025 17:59:57.113142    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7e:89:ce:23:fb:f1 ID:1,7e:89:ce:23:fb:f1 Lease:0x671c4afa}
	I1025 17:59:57.113177    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7e:e7:5b:c2:07:87 ID:1,7e:e7:5b:c2:7:87 Lease:0x671c3ca8}
	I1025 17:59:57.113208    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:7c:0f:17:27:49 ID:1,3e:7c:f:17:27:49 Lease:0x671c3c81}
	I1025 17:59:57.113235    2463 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671c4648}
	I1025 17:59:59.115439    2463 main.go:141] libmachine: Attempt 2
	I1025 17:59:59.115600    2463 main.go:141] libmachine: Searching for d6:6f:b5:3c:8a:d0 in /var/db/dhcpd_leases ...
	I1025 17:59:59.116088    2463 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1025 17:59:59.116143    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:86:82:23:b8:09 ID:1,de:86:82:23:b8:9 Lease:0x671c4bba}
	I1025 17:59:59.116175    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7e:89:ce:23:fb:f1 ID:1,7e:89:ce:23:fb:f1 Lease:0x671c4afa}
	I1025 17:59:59.116204    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7e:e7:5b:c2:07:87 ID:1,7e:e7:5b:c2:7:87 Lease:0x671c3ca8}
	I1025 17:59:59.116233    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:7c:0f:17:27:49 ID:1,3e:7c:f:17:27:49 Lease:0x671c3c81}
	I1025 17:59:59.116263    2463 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671c4648}
	I1025 18:00:01.118445    2463 main.go:141] libmachine: Attempt 3
	I1025 18:00:01.118503    2463 main.go:141] libmachine: Searching for d6:6f:b5:3c:8a:d0 in /var/db/dhcpd_leases ...
	I1025 18:00:01.118591    2463 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1025 18:00:01.118605    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:86:82:23:b8:09 ID:1,de:86:82:23:b8:9 Lease:0x671c4bba}
	I1025 18:00:01.118615    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7e:89:ce:23:fb:f1 ID:1,7e:89:ce:23:fb:f1 Lease:0x671c4afa}
	I1025 18:00:01.118620    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7e:e7:5b:c2:07:87 ID:1,7e:e7:5b:c2:7:87 Lease:0x671c3ca8}
	I1025 18:00:01.118636    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:7c:0f:17:27:49 ID:1,3e:7c:f:17:27:49 Lease:0x671c3c81}
	I1025 18:00:01.118646    2463 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671c4648}
	I1025 18:00:03.120674    2463 main.go:141] libmachine: Attempt 4
	I1025 18:00:03.120691    2463 main.go:141] libmachine: Searching for d6:6f:b5:3c:8a:d0 in /var/db/dhcpd_leases ...
	I1025 18:00:03.120774    2463 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1025 18:00:03.120783    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:86:82:23:b8:09 ID:1,de:86:82:23:b8:9 Lease:0x671c4bba}
	I1025 18:00:03.120789    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7e:89:ce:23:fb:f1 ID:1,7e:89:ce:23:fb:f1 Lease:0x671c4afa}
	I1025 18:00:03.120794    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7e:e7:5b:c2:07:87 ID:1,7e:e7:5b:c2:7:87 Lease:0x671c3ca8}
	I1025 18:00:03.120804    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:7c:0f:17:27:49 ID:1,3e:7c:f:17:27:49 Lease:0x671c3c81}
	I1025 18:00:03.120809    2463 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671c4648}
	I1025 18:00:05.122843    2463 main.go:141] libmachine: Attempt 5
	I1025 18:00:05.122869    2463 main.go:141] libmachine: Searching for d6:6f:b5:3c:8a:d0 in /var/db/dhcpd_leases ...
	I1025 18:00:05.122928    2463 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1025 18:00:05.122938    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:86:82:23:b8:09 ID:1,de:86:82:23:b8:9 Lease:0x671c4bba}
	I1025 18:00:05.122943    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7e:89:ce:23:fb:f1 ID:1,7e:89:ce:23:fb:f1 Lease:0x671c4afa}
	I1025 18:00:05.122954    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7e:e7:5b:c2:07:87 ID:1,7e:e7:5b:c2:7:87 Lease:0x671c3ca8}
	I1025 18:00:05.122959    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:7c:0f:17:27:49 ID:1,3e:7c:f:17:27:49 Lease:0x671c3c81}
	I1025 18:00:05.122964    2463 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671c4648}
	I1025 18:00:07.125002    2463 main.go:141] libmachine: Attempt 6
	I1025 18:00:07.125022    2463 main.go:141] libmachine: Searching for d6:6f:b5:3c:8a:d0 in /var/db/dhcpd_leases ...
	I1025 18:00:07.125124    2463 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1025 18:00:07.125134    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:de:86:82:23:b8:09 ID:1,de:86:82:23:b8:9 Lease:0x671c4bba}
	I1025 18:00:07.125139    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:7e:89:ce:23:fb:f1 ID:1,7e:89:ce:23:fb:f1 Lease:0x671c4afa}
	I1025 18:00:07.125145    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:7e:e7:5b:c2:07:87 ID:1,7e:e7:5b:c2:7:87 Lease:0x671c3ca8}
	I1025 18:00:07.125150    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:3e:7c:0f:17:27:49 ID:1,3e:7c:f:17:27:49 Lease:0x671c3c81}
	I1025 18:00:07.125156    2463 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x671c4648}
	I1025 18:00:09.127226    2463 main.go:141] libmachine: Attempt 7
	I1025 18:00:09.127284    2463 main.go:141] libmachine: Searching for d6:6f:b5:3c:8a:d0 in /var/db/dhcpd_leases ...
	I1025 18:00:09.127428    2463 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I1025 18:00:09.127441    2463 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:d6:6f:b5:3c:8a:d0 ID:1,d6:6f:b5:3c:8a:d0 Lease:0x671c4d27}
	I1025 18:00:09.127444    2463 main.go:141] libmachine: Found match: d6:6f:b5:3c:8a:d0
	I1025 18:00:09.127455    2463 main.go:141] libmachine: IP: 192.168.105.6
	I1025 18:00:09.127460    2463 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I1025 18:05:54.815512    2463 start.go:128] duration metric: took 6m0.058247458s to createHost
	I1025 18:05:54.815589    2463 start.go:83] releasing machines lock for "ha-499000", held for 6m0.05876475s
	W1025 18:05:54.815838    2463 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-499000" may fix it: creating host: create host timed out in 360.000000 seconds
	* Failed to start qemu2 VM. Running "minikube delete -p ha-499000" may fix it: creating host: create host timed out in 360.000000 seconds
	I1025 18:05:54.823409    2463 out.go:201] 
	W1025 18:05:54.827468    2463 out.go:270] X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds
	X Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds
	W1025 18:05:54.827535    2463 out.go:270] * Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	* Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
	W1025 18:05:54.827614    2463 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/7072
	* Related issue: https://github.com/kubernetes/minikube/issues/7072
	I1025 18:05:54.845408    2463 out.go:201] 

                                                
                                                
** /stderr **
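For readers tracing the provisioning path in the log above: the disk step reduces to the two qemu-img invocations the driver records (convert the raw boot image to qcow2, then grow it by +20000M). The following is a minimal standalone sketch of that sequence, not minikube's actual source; the file paths are hypothetical placeholders and only the command arguments mirror the logged calls.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // createDiskImage reproduces the two qemu-img steps from the log:
    // convert the raw seed image to qcow2, then resize it in place.
    func createDiskImage(rawPath, qcow2Path, grow string) error {
        if out, err := exec.Command("qemu-img", "convert",
            "-f", "raw", "-O", "qcow2", rawPath, qcow2Path).CombinedOutput(); err != nil {
            return fmt.Errorf("qemu-img convert: %v: %s", err, out)
        }
        if out, err := exec.Command("qemu-img", "resize",
            qcow2Path, grow).CombinedOutput(); err != nil {
            return fmt.Errorf("qemu-img resize: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        // Hypothetical paths; "+20000M" matches the logged resize argument.
        if err := createDiskImage("disk.qcow2.raw", "disk.qcow2", "+20000M"); err != nil {
            fmt.Println(err)
        }
    }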
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-499000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 52
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000: exit status 7 (71.988167ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 18:05:54.931367    2927 status.go:393] failed to get driver ip: parsing IP: 
	E1025 18:05:54.931373    2927 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-499000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StartCluster (725.38s)
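The repeated "Attempt N" lines in this failure come from polling the macOS DHCP lease database every two seconds until the new VM's MAC address appears. Notably, in both runs above the MAC does show up on attempt 7 (within about 15 seconds), so the six-minute createHost timeout is spent in the subsequent "Waiting for VM to start (ssh ...)" phase, not in lease discovery. Below is a minimal sketch of such a polling loop; the /var/db/dhcpd_leases field layout assumed here (brace-delimited blocks with ip_address= and hw_address=1,<mac> lines, leading zeros dropped from MAC octets as the log's ID fields suggest) is an assumption, not minikube's actual parser.

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
        "time"
    )

    // findIPForMAC scans the lease file for a block whose hw_address
    // matches the given MAC and returns that block's ip_address.
    // The block layout and field names are assumptions.
    func findIPForMAC(leaseFile, mac string) (string, bool) {
        data, err := os.ReadFile(leaseFile)
        if err != nil {
            return "", false
        }
        ipRe := regexp.MustCompile(`ip_address=(\S+)`)
        for _, block := range strings.Split(string(data), "}") {
            if strings.Contains(block, "hw_address=1,"+mac) {
                if m := ipRe.FindStringSubmatch(block); m != nil {
                    return m[1], true
                }
            }
        }
        return "", false
    }

    func main() {
        mac := "de:86:82:23:b8:9" // leading zero dropped, as in the log's ID field
        for attempt := 0; attempt < 8; attempt++ {
            if ip, ok := findIPForMAC("/var/db/dhcpd_leases", mac); ok {
                fmt.Println("found IP:", ip)
                return
            }
            fmt.Println("attempt", attempt, "- no lease yet")
            time.Sleep(2 * time.Second) // matches the ~2s cadence in the log
        }
    }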

                                                
                                    
TestMultiControlPlane/serial/DeployApp (113.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (63.826875ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-499000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- rollout status deployment/busybox: exit status 1 (62.110292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (61.712125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:05:55.120377    1672 retry.go:31] will retry after 1.069876761s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.838542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:05:56.301586    1672 retry.go:31] will retry after 1.496188412s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.119459ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:05:57.910333    1672 retry.go:31] will retry after 3.353949967s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.696ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:06:01.375310    1672 retry.go:31] will retry after 4.359040938s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.227916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:06:05.844938    1672 retry.go:31] will retry after 3.717771134s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.096167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:06:09.675108    1672 retry.go:31] will retry after 10.034123192s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.734625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:06:19.820300    1672 retry.go:31] will retry after 14.184279069s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1025 18:06:24.348714    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.293417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:06:34.115284    1672 retry.go:31] will retry after 18.881381788s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.939792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:06:53.106945    1672 retry.go:31] will retry after 20.488830573s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.551209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:07:13.705455    1672 retry.go:31] will retry after 34.197941001s: failed to retrieve Pod IPs (may be temporary): exit status 1
E1025 18:07:47.442925    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.683833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.82725ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.193708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.300209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (61.554167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000: exit status 7 (35.124875ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 18:07:48.296772    3025 status.go:393] failed to get driver ip: parsing IP: 
	E1025 18:07:48.296777    3025 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-499000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DeployApp (113.37s)
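The retry.go:31 lines above show the harness's retry policy: waits that roughly double, with random jitter, from about 1s up to about 34s before the test gives up. A self-contained sketch of that jittered exponential backoff pattern follows; the constants and the retryWithBackoff helper are illustrative, not the harness's actual parameters.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff retries op with roughly doubling waits plus random
    // jitter, stopping once the base wait exceeds maxWait.
    func retryWithBackoff(op func() error, maxWait time.Duration) error {
        wait := time.Second
        var err error
        for wait <= maxWait {
            if err = op(); err == nil {
                return nil
            }
            // Jitter: stretch the base wait by a random factor in [1.0, 1.5).
            jittered := time.Duration(float64(wait) * (1.0 + rand.Float64()/2))
            fmt.Printf("will retry after %v: %v\n", jittered, err)
            time.Sleep(jittered)
            wait *= 2
        }
        return err
    }

    func main() {
        calls := 0
        err := retryWithBackoff(func() error {
            calls++
            if calls < 4 {
                return errors.New("failed to retrieve Pod IPs (may be temporary)")
            }
            return nil
        }, 30*time.Second)
        fmt.Println("result:", err)
    }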

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-499000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.688417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-499000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000: exit status 7 (35.025834ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 18:07:48.394789    3030 status.go:393] failed to get driver ip: parsing IP: 
	E1025 18:07:48.394795    3030 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-499000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-499000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-499000 -v=7 --alsologtostderr: exit status 50 (51.261708ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 18:07:48.428656    3032 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:07:48.428940    3032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:07:48.428943    3032 out.go:358] Setting ErrFile to fd 2...
	I1025 18:07:48.428945    3032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:07:48.429077    3032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:07:48.429324    3032 mustload.go:65] Loading cluster: ha-499000
	I1025 18:07:48.429551    3032 config.go:182] Loaded profile config "ha-499000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:07:48.430224    3032 host.go:66] Checking if "ha-499000" exists ...
	I1025 18:07:48.434286    3032 out.go:201] 
	W1025 18:07:48.438195    3032 out.go:270] X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-499000 endpoint: failed to lookup ip for ""
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-499000 endpoint: failed to lookup ip for ""
	W1025 18:07:48.438228    3032 out.go:270] * Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	I1025 18:07:48.443110    3032 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-499000 -v=7 --alsologtostderr" : exit status 50
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000: exit status 7 (35.192292ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 18:07:48.481474    3034 status.go:393] failed to get driver ip: parsing IP: 
	E1025 18:07:48.481480    3034 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-499000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.09s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-499000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-499000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.934875ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-499000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-499000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-499000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000: exit status 7 (35.681833ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1025 18:07:48.544433    3037 status.go:393] failed to get driver ip: parsing IP: 
	E1025 18:07:48.544443    3037 status.go:119] status error: parsing IP: 

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-499000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-499000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-499000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-499000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-499000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-499000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-499000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-499000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-499000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000: exit status 7 (34.50325ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1025 18:07:48.634940    3042 status.go:393] failed to get driver ip: parsing IP: 
	E1025 18:07:48.634947    3042 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-499000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
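The assertion above reads the profile JSON dumped in the failure message. A manual repro of the same check, as a sketch (assumes jq is available on the runner; field names are taken from the JSON above):

    # pull the Status field the HAppy check asserts on
    out/minikube-darwin-arm64 profile list --output json \
      | jq -r '.valid[] | select(.Name == "ha-499000") | .Status'
    # expected "HAppy"; this run prints "Unknown" because the host never came up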
TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-499000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-499000 node stop m02 -v=7 --alsologtostderr: exit status 85 (51.546625ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1025 18:07:48.704347    3046 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:07:48.704616    3046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:07:48.704619    3046 out.go:358] Setting ErrFile to fd 2...
	I1025 18:07:48.704621    3046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:07:48.704763    3046 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:07:48.705025    3046 mustload.go:65] Loading cluster: ha-499000
	I1025 18:07:48.705233    3046 config.go:182] Loaded profile config "ha-499000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:07:48.709235    3046 out.go:201] 
	W1025 18:07:48.712298    3046 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1025 18:07:48.712303    3046 out.go:270] * 
	* 
	W1025 18:07:48.713772    3046 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:07:48.718052    3046 out.go:201] 
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-499000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-499000 status -v=7 --alsologtostderr
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-499000 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-499000 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-499000 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-499000 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000: exit status 7 (35.164666ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1025 18:07:48.792012    3050 status.go:393] failed to get driver ip: parsing IP: 
	E1025 18:07:48.792018    3050 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-499000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
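Exit status 85 (GUEST_NODE_RETRIEVE) above means the profile has no node named m02: the cluster start earlier in this report never got past the primary node. A minimal pre-check, sketched with commands used elsewhere in this report:

    # list the nodes the profile actually has before stopping one
    out/minikube-darwin-arm64 node list -p ha-499000
    # only ha-499000 (the primary) exists in this run, so this fails:
    out/minikube-darwin-arm64 -p ha-499000 node stop m02 -v=7 --alsologtostderr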
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-499000" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-499000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-499000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-499000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000: exit status 7 (34.486458ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1025 18:07:48.880793    3055 status.go:393] failed to get driver ip: parsing IP: 
	E1025 18:07:48.880800    3055 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-499000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)
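Every post-mortem in this group exits with status 7 and "parsing IP:" because the stopped guest has no driver IP for status to parse. A direct probe of the same value, as a sketch (the ip subcommand is standard minikube but not otherwise exercised in this report):

    # status derives host state from the driver IP; read it directly
    out/minikube-darwin-arm64 -p ha-499000 ip
    # a healthy VM prints an address such as 192.168.105.6 (the DHCP lease
    # seen in the restart log below); here the lookup comes back empty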
TestMultiControlPlane/serial/RestartSecondaryNode (0.15s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-499000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-499000 node start m02 -v=7 --alsologtostderr: exit status 85 (48.129ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1025 18:07:48.914512    3057 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:07:48.914787    3057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:07:48.914791    3057 out.go:358] Setting ErrFile to fd 2...
	I1025 18:07:48.914794    3057 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:07:48.914925    3057 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:07:48.915170    3057 mustload.go:65] Loading cluster: ha-499000
	I1025 18:07:48.915371    3057 config.go:182] Loaded profile config "ha-499000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:07:48.919306    3057 out.go:201] 
	W1025 18:07:48.920470    3057 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1025 18:07:48.920475    3057 out.go:270] * 
	* 
	W1025 18:07:48.921830    3057 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:07:48.925224    3057 out.go:201] 
** /stderr **
ha_test.go:424: I1025 18:07:48.914512    3057 out.go:345] Setting OutFile to fd 1 ...
I1025 18:07:48.914787    3057 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 18:07:48.914791    3057 out.go:358] Setting ErrFile to fd 2...
I1025 18:07:48.914794    3057 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 18:07:48.914925    3057 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
I1025 18:07:48.915170    3057 mustload.go:65] Loading cluster: ha-499000
I1025 18:07:48.915371    3057 config.go:182] Loaded profile config "ha-499000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1025 18:07:48.919306    3057 out.go:201] 
W1025 18:07:48.920470    3057 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1025 18:07:48.920475    3057 out.go:270] * 
* 
W1025 18:07:48.921830    3057 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1025 18:07:48.925224    3057 out.go:201] 
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-499000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-499000 status -v=7 --alsologtostderr
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-499000 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-darwin-arm64 -p ha-499000 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-darwin-arm64 -p ha-499000 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-darwin-arm64 -p ha-499000 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
ha_test.go:450: (dbg) Non-zero exit: kubectl get nodes: exit status 1 (34.470875ms)
** stderr ** 
	E1025 18:07:48.996129    3061 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1025 18:07:48.996691    3061 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1025 18:07:48.997826    3061 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1025 18:07:48.998216    3061 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	E1025 18:07:48.999590    3061 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused
	The connection to the server localhost:8080 was refused - did you specify the right host or port?
** /stderr **
ha_test.go:452: failed to kubectl get nodes. args "kubectl get nodes" : exit status 1
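kubectl dials localhost:8080 above because no kubeconfig context is active for the dead cluster. Two alternatives that target the profile's context, as a sketch (minikube names the kubeconfig context after the profile):

    # select the context minikube wrote for this profile
    kubectl config use-context ha-499000
    kubectl get nodes
    # or let minikube resolve the context and kubectl version itself
    out/minikube-darwin-arm64 -p ha-499000 kubectl -- get nodes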
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000: exit status 7 (35.442ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1025 18:07:49.034952    3062 status.go:393] failed to get driver ip: parsing IP: 
	E1025 18:07:49.034958    3062 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-499000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (0.15s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-499000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-499000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-499000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-499000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-499000" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-499000\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-499000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-499000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000: exit status 7 (34.734209ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1025 18:07:49.121303    3067 status.go:393] failed to get driver ip: parsing IP: 
	E1025 18:07:49.121309    3067 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-499000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)
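The 4-node assertion (ha_test.go:305) counts entries in Config.Nodes. The same count can be read from the profile JSON, as a sketch assuming jq:

    # the test expects 4 nodes (three control planes plus a worker); this run has 1
    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | select(.Name == "ha-499000") | .Config.Nodes | length'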
TestMultiControlPlane/serial/RestartClusterKeepsNodes (960.47s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-499000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-499000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-499000 -v=7 --alsologtostderr: (5.144654042s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-499000 --wait=true -v=7 --alsologtostderr
E1025 18:07:55.669500    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:11:24.345989    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:12:55.667252    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:14:18.763029    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:16:24.266928    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:17:55.587902    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:21:24.261600    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:22:55.581652    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-499000 --wait=true -v=7 --alsologtostderr: signal: killed (15m55.253352958s)
-- stdout --
	* [ha-499000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-499000" primary control-plane node in "ha-499000" cluster
	* Restarting existing qemu2 VM for "ha-499000" ...
-- /stdout --
** stderr ** 
	I1025 18:07:54.369656    3088 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:07:54.369837    3088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:07:54.369841    3088 out.go:358] Setting ErrFile to fd 2...
	I1025 18:07:54.369844    3088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:07:54.370016    3088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:07:54.371289    3088 out.go:352] Setting JSON to false
	I1025 18:07:54.391643    3088 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2245,"bootTime":1729902629,"procs":558,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:07:54.391712    3088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:07:54.396391    3088 out.go:177] * [ha-499000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:07:54.403277    3088 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:07:54.403318    3088 notify.go:220] Checking for updates...
	I1025 18:07:54.410241    3088 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:07:54.414314    3088 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:07:54.417292    3088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:07:54.421261    3088 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:07:54.424280    3088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:07:54.428380    3088 config.go:182] Loaded profile config "ha-499000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:07:54.428431    3088 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:07:54.433314    3088 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 18:07:54.440135    3088 start.go:297] selected driver: qemu2
	I1025 18:07:54.440140    3088 start.go:901] validating driver "qemu2" against &{Name:ha-499000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.2 ClusterName:ha-499000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:07:54.440199    3088 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:07:54.442644    3088 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:07:54.442667    3088 cni.go:84] Creating CNI manager for ""
	I1025 18:07:54.442690    3088 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1025 18:07:54.442727    3088 start.go:340] cluster config:
	{Name:ha-499000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-499000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:07:54.447253    3088 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:07:54.455283    3088 out.go:177] * Starting "ha-499000" primary control-plane node in "ha-499000" cluster
	I1025 18:07:54.459273    3088 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:07:54.459288    3088 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:07:54.459301    3088 cache.go:56] Caching tarball of preloaded images
	I1025 18:07:54.459373    3088 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:07:54.459378    3088 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:07:54.459435    3088 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/ha-499000/config.json ...
	I1025 18:07:54.459842    3088 start.go:360] acquireMachinesLock for ha-499000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:07:54.459887    3088 start.go:364] duration metric: took 39.708µs to acquireMachinesLock for "ha-499000"
	I1025 18:07:54.459895    3088 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:07:54.459899    3088 fix.go:54] fixHost starting: 
	I1025 18:07:54.460016    3088 fix.go:112] recreateIfNeeded on ha-499000: state=Stopped err=<nil>
	W1025 18:07:54.460024    3088 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:07:54.468216    3088 out.go:177] * Restarting existing qemu2 VM for "ha-499000" ...
	I1025 18:07:54.472228    3088 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:07:54.472262    3088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:6f:b5:3c:8a:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/ha-499000/disk.qcow2
	I1025 18:07:54.511678    3088 main.go:141] libmachine: STDOUT: 
	I1025 18:07:54.511721    3088 main.go:141] libmachine: STDERR: 
	I1025 18:07:54.511725    3088 main.go:141] libmachine: Attempt 0
	I1025 18:07:54.511746    3088 main.go:141] libmachine: Searching for d6:6f:b5:3c:8a:d0 in /var/db/dhcpd_leases ...
	I1025 18:07:54.511820    3088 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I1025 18:07:54.511835    3088 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:d6:6f:b5:3c:8a:d0 ID:1,d6:6f:b5:3c:8a:d0 Lease:0x671c40e7}
	I1025 18:07:54.511839    3088 main.go:141] libmachine: Found match: d6:6f:b5:3c:8a:d0
	I1025 18:07:54.511846    3088 main.go:141] libmachine: IP: 192.168.105.6
	I1025 18:07:54.511851    3088 main.go:141] libmachine: Waiting for VM to start (ssh -p 0 docker@192.168.105.6)...
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-499000 -v=7 --alsologtostderr" : signal: killed
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-499000
ha_test.go:474: (dbg) Non-zero exit: out/minikube-darwin-arm64 node list -p ha-499000: context deadline exceeded (625ns)
ha_test.go:476: failed to run node list. args "out/minikube-darwin-arm64 node list -p ha-499000" : context deadline exceeded
ha_test.go:481: reported node list is not the same after restart. Before restart: ha-499000	
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-499000 -n ha-499000: exit status 7 (39.201959ms)
-- stdout --
	Error
-- /stdout --
** stderr ** 
	E1025 18:23:49.504375    3348 status.go:393] failed to get driver ip: parsing IP: 
	E1025 18:23:49.504384    3348 status.go:119] status error: parsing IP: 
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-499000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (960.47s)
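RestartClusterKeepsNodes diffs the node list across a stop/start cycle; here the start hung until the harness killed it at 15m55s, so the "after" list was never captured. The comparison amounts to the following sketch:

    before=$(out/minikube-darwin-arm64 node list -p ha-499000)
    out/minikube-darwin-arm64 stop -p ha-499000
    out/minikube-darwin-arm64 start -p ha-499000 --wait=true
    after=$(out/minikube-darwin-arm64 node list -p ha-499000)
    [ "$before" = "$after" ] || echo "node list changed across restart"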
TestJSONOutput/start/Command (725.26s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-346000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E1025 18:24:27.356755    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:26:24.255804    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:27:55.575884    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:30:58.671866    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:31:24.249500    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:32:55.569227    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:36:24.243203    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-346000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 52 (12m5.259609834s)
-- stdout --
	{"specversion":"1.0","id":"8434bf6f-25dc-4693-a50d-db49e85a0305","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-346000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e0013051-8484-4bbf-922d-2b0eba4736b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19868"}}
	{"specversion":"1.0","id":"d648b620-22ac-4433-a73d-8479c4e465aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig"}}
	{"specversion":"1.0","id":"0925cbe5-313f-4502-bc6b-46b1634a43a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"9b6f8592-644f-447e-9b04-7621fbcc4794","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a4cb147b-eb7e-4797-80e2-b3326808c59f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube"}}
	{"specversion":"1.0","id":"cdf28f59-881d-47a0-992f-b31fccfc01b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"258f732a-8cfd-43c3-9198-f600a44868bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"11b8e4f7-b668-4180-814f-5b887b710f91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"5578ecc6-29fd-4114-8d04-c42ac38942a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-346000\" primary control-plane node in \"json-output-346000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"51f004d4-ec83-4d6d-87ba-874838e016d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"57ffeb21-c7e0-4b31-9bd6-e363ad11070c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-346000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"f4548af7-b56d-44c4-9e4b-1f16b66a7cc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"}}
	{"specversion":"1.0","id":"3dacd5c8-f208-4c73-bb8c-e584644bab14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"9986f4a7-5157-4971-95a8-2281c4423c67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-346000\" may fix it: creating host: create host timed out in 360.000000 seconds"}}
	{"specversion":"1.0","id":"899a8209-7313-497e-b443-2da444c4a663","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try 'minikube delete', and disable any conflicting VPN or firewall software","exitcode":"52","issues":"https://github.com/kubernetes/minikube/issues/7072","message":"Failed to start host: creating host: create host timed out in 360.000000 seconds","name":"DRV_CREATE_TIMEOUT","url":""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-346000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 52
--- FAIL: TestJSONOutput/start/Command (725.26s)
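Each stdout line above is a CloudEvent; step events carry data.currentstep, which the parallel subtests below require to be distinct. A hand check for reused step numbers, sketched with jq:

    # list the currentstep of every io.k8s.sigs.minikube.step event,
    # then print any value that occurs more than once
    out/minikube-darwin-arm64 start -p json-output-346000 --output=json --driver=qemu2 \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.currentstep' \
      | sort | uniq -d
    # this run prints "9": the create-VM and delete-and-retry messages share it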
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
json_output_test.go:114: step 9 has already been assigned to another step:
Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
Cannot use for:
Deleting "json-output-346000" in qemu2 ...
[Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 8434bf6f-25dc-4693-a50d-db49e85a0305
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-346000] minikube v1.34.0 on Darwin 15.0.1 (arm64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: e0013051-8484-4bbf-922d-2b0eba4736b5
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=19868"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: d648b620-22ac-4433-a73d-8479c4e465aa
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 0925cbe5-313f-4502-bc6b-46b1634a43a4
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-arm64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9b6f8592-644f-447e-9b04-7621fbcc4794
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: a4cb147b-eb7e-4797-80e2-b3326808c59f
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: cdf28f59-881d-47a0-992f-b31fccfc01b4
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 258f732a-8cfd-43c3-9198-f600a44868bb
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the qemu2 driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 11b8e4f7-b668-4180-814f-5b887b710f91
datacontenttype: application/json
Data,
{
"message": "Automatically selected the socket_vmnet network"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 5578ecc6-29fd-4114-8d04-c42ac38942a2
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-346000\" primary control-plane node in \"json-output-346000\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 51f004d4-ec83-4d6d-87ba-874838e016d5
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 57ffeb21-c7e0-4b31-9bd6-e363ad11070c
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-346000\" in qemu2 ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: f4548af7-b56d-44c4-9e4b-1f16b66a7cc3
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3dacd5c8-f208-4c73-bb8c-e584644bab14
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 9986f4a7-5157-4971-95a8-2281c4423c67
datacontenttype: application/json
Data,
{
"message": "Failed to start qemu2 VM. Running \"minikube delete -p json-output-346000\" may fix it: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 899a8209-7313-497e-b443-2da444c4a663
datacontenttype: application/json
Data,
{
"advice": "Try 'minikube delete', and disable any conflicting VPN or firewall software",
"exitcode": "52",
"issues": "https://github.com/kubernetes/minikube/issues/7072",
"message": "Failed to start host: creating host: create host timed out in 360.000000 seconds",
"name": "DRV_CREATE_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
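The failed assertion is that a given currentstep index belongs to exactly one step, and the timeout-and-retry path above reused step 9 ("Creating VM") for both the create and the delete-then-recreate messages. A rough sketch of such a distinctness check, reusing the cloudEvent type from the sketch above (this approximates the idea, not the actual json_output_test.go code; imports: fmt):

// checkDistinctSteps approximates the distinctness assertion: each
// currentstep value may carry only one step message.
func checkDistinctSteps(events []cloudEvent) error {
	seen := map[string]string{} // currentstep -> first message seen for it
	for _, ev := range events {
		if ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		step, msg := ev.Data["currentstep"], ev.Data["message"]
		if prev, ok := seen[step]; ok && prev != msg {
			return fmt.Errorf("step %s has already been assigned to another step: %s; cannot use for: %s", step, prev, msg)
		}
		seen[step] = msg
	}
	return nil
}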

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
json_output_test.go:144: current step is not in increasing order: [Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 8434bf6f-25dc-4693-a50d-db49e85a0305
datacontenttype: application/json
Data,
{
"currentstep": "0",
"message": "[json-output-346000] minikube v1.34.0 on Darwin 15.0.1 (arm64)",
"name": "Initial Minikube Setup",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: e0013051-8484-4bbf-922d-2b0eba4736b5
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_LOCATION=19868"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: d648b620-22ac-4433-a73d-8479c4e465aa
datacontenttype: application/json
Data,
{
"message": "KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 0925cbe5-313f-4502-bc6b-46b1634a43a4
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_BIN=out/minikube-darwin-arm64"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 9b6f8592-644f-447e-9b04-7621fbcc4794
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: a4cb147b-eb7e-4797-80e2-b3326808c59f
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: cdf28f59-881d-47a0-992f-b31fccfc01b4
datacontenttype: application/json
Data,
{
"message": "MINIKUBE_FORCE_SYSTEMD="
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 258f732a-8cfd-43c3-9198-f600a44868bb
datacontenttype: application/json
Data,
{
"currentstep": "1",
"message": "Using the qemu2 driver based on user configuration",
"name": "Selecting Driver",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.info
source: https://minikube.sigs.k8s.io/
id: 11b8e4f7-b668-4180-814f-5b887b710f91
datacontenttype: application/json
Data,
{
"message": "Automatically selected the socket_vmnet network"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 5578ecc6-29fd-4114-8d04-c42ac38942a2
datacontenttype: application/json
Data,
{
"currentstep": "3",
"message": "Starting \"json-output-346000\" primary control-plane node in \"json-output-346000\" cluster",
"name": "Starting Node",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 51f004d4-ec83-4d6d-87ba-874838e016d5
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 57ffeb21-c7e0-4b31-9bd6-e363ad11070c
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Deleting \"json-output-346000\" in qemu2 ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: f4548af7-b56d-44c4-9e4b-1f16b66a7cc3
datacontenttype: application/json
Data,
{
"message": "StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.step
source: https://minikube.sigs.k8s.io/
id: 3dacd5c8-f208-4c73-bb8c-e584644bab14
datacontenttype: application/json
Data,
{
"currentstep": "9",
"message": "Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...",
"name": "Creating VM",
"totalsteps": "19"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 9986f4a7-5157-4971-95a8-2281c4423c67
datacontenttype: application/json
Data,
{
"message": "Failed to start qemu2 VM. Running \"minikube delete -p json-output-346000\" may fix it: creating host: create host timed out in 360.000000 seconds"
}
Context Attributes,
specversion: 1.0
type: io.k8s.sigs.minikube.error
source: https://minikube.sigs.k8s.io/
id: 899a8209-7313-497e-b443-2da444c4a663
datacontenttype: application/json
Data,
{
"advice": "Try 'minikube delete', and disable any conflicting VPN or firewall software",
"exitcode": "52",
"issues": "https://github.com/kubernetes/minikube/issues/7072",
"message": "Failed to start host: creating host: create host timed out in 360.000000 seconds",
"name": "DRV_CREATE_TIMEOUT",
"url": ""
}
]
--- FAIL: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
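The same reused step trips this test too: the step events above carry currentstep values 0, 1, 3, 9, 9, 9, so any check requiring each step index to be strictly larger than the previous one fails at the second 9. A sketch under that strictly-increasing reading (inferred from the failure message, not taken from the test source; again reuses the cloudEvent type; imports: fmt, strconv):

// checkIncreasingSteps requires each step event to carry a strictly
// larger currentstep than the one before it.
func checkIncreasingSteps(events []cloudEvent) error {
	last := -1
	for _, ev := range events {
		if ev.Type != "io.k8s.sigs.minikube.step" {
			continue
		}
		cur, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			return fmt.Errorf("bad currentstep %q: %v", ev.Data["currentstep"], err)
		}
		if cur <= last {
			return fmt.Errorf("current step is not in increasing order: %d after %d", cur, last)
		}
		last = cur
	}
	return nil
}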

TestJSONOutput/pause/Command (0.09s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-346000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-346000 --output=json --user=testUser: exit status 50 (86.806334ms)

-- stdout --
	{"specversion":"1.0","id":"c1dc76e6-8146-4ad6-ae28-c643ae3dfc7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Recreate the cluster by running:\n\t\tminikube delete {{.profileArg}}\n\t\tminikube start {{.profileArg}}","exitcode":"50","issues":"","message":"Unable to get control-plane node json-output-346000 endpoint: failed to lookup ip for \"\"","name":"DRV_CP_ENDPOINT","url":""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-346000 --output=json --user=testUser": exit status 50
--- FAIL: TestJSONOutput/pause/Command (0.09s)

TestJSONOutput/unpause/Command (0.06s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-346000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-346000 --output=json --user=testUser: exit status 50 (58.496334ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node json-output-346000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-346000 --output=json --user=testUser": exit status 50
--- FAIL: TestJSONOutput/unpause/Command (0.06s)
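The literal "<no value>" in the suggestion above is what Go's text/template prints when a referenced map key is absent: the same DRV_CP_ENDPOINT advice appeared in the pause failure as "minikube delete {{.profileArg}}", and here it was evidently rendered without profileArg being supplied. A standalone reproduction of that rendering behavior:

package main

import (
	"os"
	"text/template"
)

func main() {
	t := template.Must(template.New("advice").Parse("minikube delete {{.profileArg}}\nminikube start {{.profileArg}}\n"))
	// Executing against a map that lacks profileArg prints "<no value>"
	// in place of each reference, matching the stderr above.
	_ = t.Execute(os.Stdout, map[string]string{})
}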

TestMountStart/serial/StartWithMountFirst (10.09s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-246000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
E1025 18:37:55.563554    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-246000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.010871084s)

-- stdout --
	* [mount-start-1-246000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-246000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-246000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-246000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-246000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-246000 -n mount-start-1-246000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-246000 -n mount-start-1-246000: exit status 7 (73.924791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-246000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.09s)
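Every qemu2 start in this run dies the same way: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and the host stays Stopped. A hypothetical Go probe (the socket path is taken from the log) that separates a daemon that is down from a socket file that was never created:

package main

import (
	"fmt"
	"net"
)

func main() {
	// "connection refused": the socket file exists but no socket_vmnet
	// daemon is accepting on it (the failure mode throughout this run).
	// "no such file or directory": socket_vmnet was never started at all.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}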

TestMultiNode/serial/FreshStart2Nodes (9.96s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-293000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-293000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.88940825s)

-- stdout --
	* [multinode-293000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-293000" primary control-plane node in "multinode-293000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-293000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:38:00.469794    3903 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:38:00.469951    3903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:38:00.469954    3903 out.go:358] Setting ErrFile to fd 2...
	I1025 18:38:00.469956    3903 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:38:00.470080    3903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:38:00.471172    3903 out.go:352] Setting JSON to false
	I1025 18:38:00.488763    3903 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4051,"bootTime":1729902629,"procs":556,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:38:00.488843    3903 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:38:00.495384    3903 out.go:177] * [multinode-293000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:38:00.503422    3903 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:38:00.503460    3903 notify.go:220] Checking for updates...
	I1025 18:38:00.510358    3903 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:38:00.513338    3903 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:38:00.516303    3903 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:38:00.519344    3903 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:38:00.522350    3903 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:38:00.525563    3903 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:38:00.529293    3903 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:38:00.536337    3903 start.go:297] selected driver: qemu2
	I1025 18:38:00.536345    3903 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:38:00.536352    3903 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:38:00.538895    3903 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:38:00.542293    3903 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:38:00.545461    3903 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:38:00.545494    3903 cni.go:84] Creating CNI manager for ""
	I1025 18:38:00.545516    3903 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1025 18:38:00.545521    3903 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 18:38:00.545549    3903 start.go:340] cluster config:
	{Name:multinode-293000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:38:00.550059    3903 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:38:00.558342    3903 out.go:177] * Starting "multinode-293000" primary control-plane node in "multinode-293000" cluster
	I1025 18:38:00.562201    3903 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:38:00.562219    3903 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:38:00.562231    3903 cache.go:56] Caching tarball of preloaded images
	I1025 18:38:00.562326    3903 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:38:00.562333    3903 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:38:00.562555    3903 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/multinode-293000/config.json ...
	I1025 18:38:00.562569    3903 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/multinode-293000/config.json: {Name:mk33c299165bfdb606f6292820ee85d3b3247784 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:38:00.562951    3903 start.go:360] acquireMachinesLock for multinode-293000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:38:00.563004    3903 start.go:364] duration metric: took 44.667µs to acquireMachinesLock for "multinode-293000"
	I1025 18:38:00.563016    3903 start.go:93] Provisioning new machine with config: &{Name:multinode-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:38:00.563050    3903 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:38:00.571217    3903 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 18:38:00.589633    3903 start.go:159] libmachine.API.Create for "multinode-293000" (driver="qemu2")
	I1025 18:38:00.589667    3903 client.go:168] LocalClient.Create starting
	I1025 18:38:00.589747    3903 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:38:00.589784    3903 main.go:141] libmachine: Decoding PEM data...
	I1025 18:38:00.589798    3903 main.go:141] libmachine: Parsing certificate...
	I1025 18:38:00.589835    3903 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:38:00.589871    3903 main.go:141] libmachine: Decoding PEM data...
	I1025 18:38:00.589880    3903 main.go:141] libmachine: Parsing certificate...
	I1025 18:38:00.590324    3903 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:38:00.741311    3903 main.go:141] libmachine: Creating SSH key...
	I1025 18:38:00.871298    3903 main.go:141] libmachine: Creating Disk image...
	I1025 18:38:00.871305    3903 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:38:00.871512    3903 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/disk.qcow2
	I1025 18:38:00.881631    3903 main.go:141] libmachine: STDOUT: 
	I1025 18:38:00.881644    3903 main.go:141] libmachine: STDERR: 
	I1025 18:38:00.881712    3903 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/disk.qcow2 +20000M
	I1025 18:38:00.890076    3903 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:38:00.890096    3903 main.go:141] libmachine: STDERR: 
	I1025 18:38:00.890107    3903 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/disk.qcow2
	I1025 18:38:00.890113    3903 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:38:00.890126    3903 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:38:00.890158    3903 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:c8:de:33:1f:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/disk.qcow2
	I1025 18:38:00.891988    3903 main.go:141] libmachine: STDOUT: 
	I1025 18:38:00.892002    3903 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:38:00.892020    3903 client.go:171] duration metric: took 302.353459ms to LocalClient.Create
	I1025 18:38:02.894153    3903 start.go:128] duration metric: took 2.331133958s to createHost
	I1025 18:38:02.894211    3903 start.go:83] releasing machines lock for "multinode-293000", held for 2.331246084s
	W1025 18:38:02.894272    3903 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:38:02.909643    3903 out.go:177] * Deleting "multinode-293000" in qemu2 ...
	W1025 18:38:02.933497    3903 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:38:02.933522    3903 start.go:729] Will try again in 5 seconds ...
	I1025 18:38:07.935640    3903 start.go:360] acquireMachinesLock for multinode-293000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:38:07.936188    3903 start.go:364] duration metric: took 448.625µs to acquireMachinesLock for "multinode-293000"
	I1025 18:38:07.936313    3903 start.go:93] Provisioning new machine with config: &{Name:multinode-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:38:07.936620    3903 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:38:07.952281    3903 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 18:38:08.000349    3903 start.go:159] libmachine.API.Create for "multinode-293000" (driver="qemu2")
	I1025 18:38:08.000405    3903 client.go:168] LocalClient.Create starting
	I1025 18:38:08.000552    3903 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:38:08.000632    3903 main.go:141] libmachine: Decoding PEM data...
	I1025 18:38:08.000653    3903 main.go:141] libmachine: Parsing certificate...
	I1025 18:38:08.000713    3903 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:38:08.000770    3903 main.go:141] libmachine: Decoding PEM data...
	I1025 18:38:08.000786    3903 main.go:141] libmachine: Parsing certificate...
	I1025 18:38:08.001325    3903 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:38:08.167771    3903 main.go:141] libmachine: Creating SSH key...
	I1025 18:38:08.258718    3903 main.go:141] libmachine: Creating Disk image...
	I1025 18:38:08.258726    3903 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:38:08.258909    3903 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/disk.qcow2
	I1025 18:38:08.268685    3903 main.go:141] libmachine: STDOUT: 
	I1025 18:38:08.268709    3903 main.go:141] libmachine: STDERR: 
	I1025 18:38:08.268763    3903 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/disk.qcow2 +20000M
	I1025 18:38:08.277091    3903 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:38:08.277107    3903 main.go:141] libmachine: STDERR: 
	I1025 18:38:08.277125    3903 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/disk.qcow2
	I1025 18:38:08.277131    3903 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:38:08.277140    3903 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:38:08.277168    3903 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:91:a0:1a:fc:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/disk.qcow2
	I1025 18:38:08.278887    3903 main.go:141] libmachine: STDOUT: 
	I1025 18:38:08.278900    3903 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:38:08.278913    3903 client.go:171] duration metric: took 278.508333ms to LocalClient.Create
	I1025 18:38:10.281045    3903 start.go:128] duration metric: took 2.344444708s to createHost
	I1025 18:38:10.281099    3903 start.go:83] releasing machines lock for "multinode-293000", held for 2.344934708s
	W1025 18:38:10.281503    3903 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-293000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-293000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:38:10.295149    3903 out.go:201] 
	W1025 18:38:10.299187    3903 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:38:10.299229    3903 out.go:270] * 
	* 
	W1025 18:38:10.301942    3903 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:38:10.312112    3903 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-293000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000: exit status 7 (71.736166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.96s)

TestMultiNode/serial/DeployApp2Nodes (80.17s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (131.886792ms)

** stderr ** 
	error: cluster "multinode-293000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- rollout status deployment/busybox: exit status 1 (62.177792ms)

** stderr ** 
	error: no server found for cluster "multinode-293000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.536375ms)

** stderr ** 
	error: no server found for cluster "multinode-293000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:38:10.657619    1672 retry.go:31] will retry after 1.161271179s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.480042ms)

** stderr ** 
	error: no server found for cluster "multinode-293000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:38:11.928729    1672 retry.go:31] will retry after 1.190663177s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.322792ms)

** stderr ** 
	error: no server found for cluster "multinode-293000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:38:13.230030    1672 retry.go:31] will retry after 3.013207471s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.376334ms)

** stderr ** 
	error: no server found for cluster "multinode-293000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:38:16.355659    1672 retry.go:31] will retry after 3.980224254s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.153084ms)

** stderr ** 
	error: no server found for cluster "multinode-293000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:38:20.446295    1672 retry.go:31] will retry after 7.397591654s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.421542ms)

** stderr ** 
	error: no server found for cluster "multinode-293000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:38:27.954452    1672 retry.go:31] will retry after 6.453839759s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.142583ms)

** stderr ** 
	error: no server found for cluster "multinode-293000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:38:34.518704    1672 retry.go:31] will retry after 8.889132465s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.031ms)

** stderr ** 
	error: no server found for cluster "multinode-293000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:38:43.518166    1672 retry.go:31] will retry after 23.156274148s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.876917ms)

** stderr ** 
	error: no server found for cluster "multinode-293000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1025 18:39:06.784357    1672 retry.go:31] will retry after 23.39438704s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.486666ms)

** stderr ** 
	error: no server found for cluster "multinode-293000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.972167ms)

** stderr ** 
	error: no server found for cluster "multinode-293000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.196167ms)

** stderr ** 
	error: no server found for cluster "multinode-293000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.448125ms)

** stderr ** 
	error: no server found for cluster "multinode-293000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (61.035416ms)

** stderr ** 
	error: no server found for cluster "multinode-293000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000: exit status 7 (34.822583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (80.17s)
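
Note: every kubectl call above dies with: error: no server found for cluster "multinode-293000". kubectl prints this when the kubeconfig has no cluster entry for the requested context, and none was ever written here because the VM never came up. A minimal sketch of the shell-out pattern the harness uses (binary path and profile name are taken from the log; podNames is a hypothetical helper, not the test's actual code):

    // Sketch: shelling out to the minikube kubectl passthrough, as the
    // harness does. podNames is illustrative, not the test's code.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func podNames(profile string) ([]byte, error) {
        cmd := exec.Command("out/minikube-darwin-arm64", "kubectl", "-p", profile,
            "--", "get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
        out, err := cmd.CombinedOutput()
        if err != nil {
            // With the VM stopped, no cluster entry was ever written to the
            // kubeconfig, so kubectl exits 1: no server found for cluster "..."
            return nil, fmt.Errorf("kubectl failed: %w: %s", err, out)
        }
        return out, nil
    }

    func main() {
        if _, err := podNames("multinode-293000"); err != nil {
            fmt.Println(err)
        }
    }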

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-293000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (60.252458ms)

** stderr ** 
	error: no server found for cluster "multinode-293000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000: exit status 7 (33.83325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-293000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-293000 -v 3 --alsologtostderr: exit status 83 (46.992292ms)

-- stdout --
	* The control-plane node multinode-293000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-293000"

-- /stdout --
** stderr ** 
	I1025 18:39:30.696479    3995 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:39:30.696909    3995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:30.696913    3995 out.go:358] Setting ErrFile to fd 2...
	I1025 18:39:30.696915    3995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:30.697093    3995 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:39:30.697338    3995 mustload.go:65] Loading cluster: multinode-293000
	I1025 18:39:30.697556    3995 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:39:30.701981    3995 out.go:177] * The control-plane node multinode-293000 host is not running: state=Stopped
	I1025 18:39:30.705852    3995 out.go:177]   To start a cluster, run: "minikube start -p multinode-293000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-293000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000: exit status 7 (33.746708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-293000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-293000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (32.496084ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-293000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-293000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-293000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000: exit status 7 (34.498834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.07s)
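
Note: two failures stack here. kubectl exits non-zero because the context does not exist, so the test goes on to JSON-decode empty output, and encoding/json reports exactly the "unexpected end of JSON input" seen above. A self-contained reproduction (standard library only):

    // Reproduction: decoding kubectl's empty output is what yields
    // "unexpected end of JSON input" on top of the missing-context error.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        var labels []map[string]string
        err := json.Unmarshal([]byte(""), &labels) // kubectl printed nothing
        fmt.Println(err)                           // unexpected end of JSON input
    }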

TestMultiNode/serial/ProfileList (0.09s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-293000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-293000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-293000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMN
UMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"multinode-293000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVe
rsion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\"
:\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000: exit status 7 (34.285083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)
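
Note: unlike most failures in this group, the command here succeeds; the assertion fails because the stored profile still lists a single control-plane node under Config.Nodes where the test expects 3. A sketch of the node-count check against a trimmed version of the JSON above (the struct is a pared-down, illustrative view; field names match the output):

    // Sketch: counting nodes in `minikube profile list --output json`.
    // profileList models only the fields the assertion needs.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    type profileList struct {
        Valid []struct {
            Name   string
            Config struct {
                Nodes []struct {
                    ControlPlane bool
                    Worker       bool
                }
            }
        } `json:"valid"`
    }

    func main() {
        // Trimmed from the output above: one node where the test expects three.
        raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-293000",
            "Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
        var pl profileList
        if err := json.Unmarshal(raw, &pl); err != nil {
            log.Fatal(err)
        }
        for _, p := range pl.Valid {
            fmt.Println(p.Name, "has", len(p.Config.Nodes), "node(s)") // 1, not 3
        }
    }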

TestMultiNode/serial/CopyFile (0.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 status --output json --alsologtostderr: exit status 7 (33.898125ms)

-- stdout --
	{"Name":"multinode-293000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1025 18:39:30.929645    4007 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:39:30.929820    4007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:30.929823    4007 out.go:358] Setting ErrFile to fd 2...
	I1025 18:39:30.929825    4007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:30.929961    4007 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:39:30.930090    4007 out.go:352] Setting JSON to true
	I1025 18:39:30.930103    4007 mustload.go:65] Loading cluster: multinode-293000
	I1025 18:39:30.930160    4007 notify.go:220] Checking for updates...
	I1025 18:39:30.930316    4007 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:39:30.930324    4007 status.go:174] checking status of multinode-293000 ...
	I1025 18:39:30.930562    4007 status.go:371] multinode-293000 host status = "Stopped" (err=<nil>)
	I1025 18:39:30.930566    4007 status.go:384] host is not running, skipping remaining checks
	I1025 18:39:30.930568    4007 status.go:176] multinode-293000 status: &{Name:multinode-293000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-293000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000: exit status 7 (34.500709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
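
Note: "json: cannot unmarshal object into Go value of type []cluster.Status" is a shape mismatch. With a single node, "minikube status --output json" emits one JSON object (see the stdout above), while the multi-node test decodes into a slice. A sketch of the mismatch plus a tolerant fallback (nodeStatus is an illustrative stand-in, not minikube's cluster.Status):

    // Sketch: why decoding single-node status output into a slice fails.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type nodeStatus struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        // The stdout captured above: one JSON object, not a JSON array.
        raw := []byte(`{"Name":"multinode-293000","Host":"Stopped","Kubelet":"Stopped",
            "APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

        var many []nodeStatus
        if err := json.Unmarshal(raw, &many); err != nil {
            // json: cannot unmarshal object into Go value of type []main.nodeStatus
            fmt.Println("slice decode failed:", err)
            var one nodeStatus // tolerant fallback for the single-node shape
            if err := json.Unmarshal(raw, &one); err == nil {
                many = []nodeStatus{one}
            }
        }
        fmt.Printf("%+v\n", many)
    }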

TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 node stop m03: exit status 85 (51.073792ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-293000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 status: exit status 7 (33.690833ms)

-- stdout --
	multinode-293000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 status --alsologtostderr: exit status 7 (34.149209ms)

-- stdout --
	multinode-293000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 18:39:31.083982    4015 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:39:31.084170    4015 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:31.084173    4015 out.go:358] Setting ErrFile to fd 2...
	I1025 18:39:31.084175    4015 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:31.084295    4015 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:39:31.084420    4015 out.go:352] Setting JSON to false
	I1025 18:39:31.084429    4015 mustload.go:65] Loading cluster: multinode-293000
	I1025 18:39:31.084488    4015 notify.go:220] Checking for updates...
	I1025 18:39:31.084648    4015 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:39:31.084656    4015 status.go:174] checking status of multinode-293000 ...
	I1025 18:39:31.084911    4015 status.go:371] multinode-293000 host status = "Stopped" (err=<nil>)
	I1025 18:39:31.084914    4015 status.go:384] host is not running, skipping remaining checks
	I1025 18:39:31.084916    4015 status.go:176] multinode-293000 status: &{Name:multinode-293000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-293000 status --alsologtostderr": multinode-293000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000: exit status 7 (33.877959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)
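
Note: "incorrect number of running kubelets" suggests the follow-up assertion amounts to counting "kubelet: Running" lines in the status text, and with the host stopped there are none. A sketch of that kind of count, assuming that is roughly what the check does:

    // Sketch (assumption): count running kubelets in `minikube status` output.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // The stdout captured above, flattened.
        status := "multinode-293000\ntype: Control Plane\nhost: Stopped\n" +
            "kubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
        fmt.Println("running kubelets:", strings.Count(status, "kubelet: Running")) // 0
    }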

TestMultiNode/serial/StartAfterStop (48.12s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 node start m03 -v=7 --alsologtostderr: exit status 85 (50.855208ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 18:39:31.152356    4019 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:39:31.152646    4019 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:31.152649    4019 out.go:358] Setting ErrFile to fd 2...
	I1025 18:39:31.152651    4019 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:31.152794    4019 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:39:31.153029    4019 mustload.go:65] Loading cluster: multinode-293000
	I1025 18:39:31.153215    4019 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:39:31.157999    4019 out.go:201] 
	W1025 18:39:31.160903    4019 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1025 18:39:31.160908    4019 out.go:270] * 
	* 
	W1025 18:39:31.162344    4019 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:39:31.165863    4019 out.go:201] 

** /stderr **
multinode_test.go:284: I1025 18:39:31.152356    4019 out.go:345] Setting OutFile to fd 1 ...
I1025 18:39:31.152646    4019 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 18:39:31.152649    4019 out.go:358] Setting ErrFile to fd 2...
I1025 18:39:31.152651    4019 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 18:39:31.152794    4019 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
I1025 18:39:31.153029    4019 mustload.go:65] Loading cluster: multinode-293000
I1025 18:39:31.153215    4019 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1025 18:39:31.157999    4019 out.go:201] 
W1025 18:39:31.160903    4019 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1025 18:39:31.160908    4019 out.go:270] * 
* 
W1025 18:39:31.162344    4019 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1025 18:39:31.165863    4019 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-293000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr: exit status 7 (34.520375ms)

-- stdout --
	multinode-293000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 18:39:31.203679    4021 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:39:31.203868    4021 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:31.203872    4021 out.go:358] Setting ErrFile to fd 2...
	I1025 18:39:31.203874    4021 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:31.204009    4021 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:39:31.204140    4021 out.go:352] Setting JSON to false
	I1025 18:39:31.204150    4021 mustload.go:65] Loading cluster: multinode-293000
	I1025 18:39:31.204206    4021 notify.go:220] Checking for updates...
	I1025 18:39:31.204368    4021 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:39:31.204377    4021 status.go:174] checking status of multinode-293000 ...
	I1025 18:39:31.204626    4021 status.go:371] multinode-293000 host status = "Stopped" (err=<nil>)
	I1025 18:39:31.204630    4021 status.go:384] host is not running, skipping remaining checks
	I1025 18:39:31.204632    4021 status.go:176] multinode-293000 status: &{Name:multinode-293000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1025 18:39:31.205570    1672 retry.go:31] will retry after 859.664695ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr: exit status 7 (78.71975ms)

-- stdout --
	multinode-293000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 18:39:32.144015    4023 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:39:32.144238    4023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:32.144243    4023 out.go:358] Setting ErrFile to fd 2...
	I1025 18:39:32.144246    4023 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:32.144419    4023 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:39:32.144589    4023 out.go:352] Setting JSON to false
	I1025 18:39:32.144603    4023 mustload.go:65] Loading cluster: multinode-293000
	I1025 18:39:32.144631    4023 notify.go:220] Checking for updates...
	I1025 18:39:32.144877    4023 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:39:32.144887    4023 status.go:174] checking status of multinode-293000 ...
	I1025 18:39:32.145212    4023 status.go:371] multinode-293000 host status = "Stopped" (err=<nil>)
	I1025 18:39:32.145217    4023 status.go:384] host is not running, skipping remaining checks
	I1025 18:39:32.145219    4023 status.go:176] multinode-293000 status: &{Name:multinode-293000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1025 18:39:32.146259    1672 retry.go:31] will retry after 2.103981237s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr: exit status 7 (55.3135ms)

-- stdout --
	multinode-293000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 18:39:34.304964    4026 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:39:34.305154    4026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:34.305158    4026 out.go:358] Setting ErrFile to fd 2...
	I1025 18:39:34.305161    4026 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:34.305304    4026 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:39:34.305456    4026 out.go:352] Setting JSON to false
	I1025 18:39:34.305469    4026 mustload.go:65] Loading cluster: multinode-293000
	I1025 18:39:34.305514    4026 notify.go:220] Checking for updates...
	I1025 18:39:34.305708    4026 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:39:34.305716    4026 status.go:174] checking status of multinode-293000 ...
	I1025 18:39:34.305997    4026 status.go:371] multinode-293000 host status = "Stopped" (err=<nil>)
	I1025 18:39:34.306001    4026 status.go:384] host is not running, skipping remaining checks
	I1025 18:39:34.306003    4026 status.go:176] multinode-293000 status: &{Name:multinode-293000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1025 18:39:34.307059    1672 retry.go:31] will retry after 1.843981506s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr: exit status 7 (80.031917ms)

-- stdout --
	multinode-293000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 18:39:36.231264    4028 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:39:36.231490    4028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:36.231494    4028 out.go:358] Setting ErrFile to fd 2...
	I1025 18:39:36.231497    4028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:36.231695    4028 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:39:36.231838    4028 out.go:352] Setting JSON to false
	I1025 18:39:36.231851    4028 mustload.go:65] Loading cluster: multinode-293000
	I1025 18:39:36.231896    4028 notify.go:220] Checking for updates...
	I1025 18:39:36.232130    4028 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:39:36.232139    4028 status.go:174] checking status of multinode-293000 ...
	I1025 18:39:36.232433    4028 status.go:371] multinode-293000 host status = "Stopped" (err=<nil>)
	I1025 18:39:36.232438    4028 status.go:384] host is not running, skipping remaining checks
	I1025 18:39:36.232440    4028 status.go:176] multinode-293000 status: &{Name:multinode-293000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1025 18:39:36.233451    1672 retry.go:31] will retry after 3.017688398s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr: exit status 7 (79.012625ms)

-- stdout --
	multinode-293000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 18:39:39.330378    4030 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:39:39.330607    4030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:39.330611    4030 out.go:358] Setting ErrFile to fd 2...
	I1025 18:39:39.330613    4030 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:39.330784    4030 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:39:39.330923    4030 out.go:352] Setting JSON to false
	I1025 18:39:39.330935    4030 mustload.go:65] Loading cluster: multinode-293000
	I1025 18:39:39.330967    4030 notify.go:220] Checking for updates...
	I1025 18:39:39.331188    4030 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:39:39.331197    4030 status.go:174] checking status of multinode-293000 ...
	I1025 18:39:39.331494    4030 status.go:371] multinode-293000 host status = "Stopped" (err=<nil>)
	I1025 18:39:39.331499    4030 status.go:384] host is not running, skipping remaining checks
	I1025 18:39:39.331501    4030 status.go:176] multinode-293000 status: &{Name:multinode-293000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1025 18:39:39.332519    1672 retry.go:31] will retry after 6.396245178s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr: exit status 7 (79.670584ms)

-- stdout --
	multinode-293000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 18:39:45.808561    4032 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:39:45.808761    4032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:45.808765    4032 out.go:358] Setting ErrFile to fd 2...
	I1025 18:39:45.808768    4032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:45.808944    4032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:39:45.809103    4032 out.go:352] Setting JSON to false
	I1025 18:39:45.809115    4032 mustload.go:65] Loading cluster: multinode-293000
	I1025 18:39:45.809155    4032 notify.go:220] Checking for updates...
	I1025 18:39:45.809370    4032 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:39:45.809379    4032 status.go:174] checking status of multinode-293000 ...
	I1025 18:39:45.809676    4032 status.go:371] multinode-293000 host status = "Stopped" (err=<nil>)
	I1025 18:39:45.809681    4032 status.go:384] host is not running, skipping remaining checks
	I1025 18:39:45.809684    4032 status.go:176] multinode-293000 status: &{Name:multinode-293000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1025 18:39:45.810738    1672 retry.go:31] will retry after 11.014641918s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr: exit status 7 (77.481292ms)

-- stdout --
	multinode-293000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 18:39:56.902918    4039 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:39:56.903172    4039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:56.903177    4039 out.go:358] Setting ErrFile to fd 2...
	I1025 18:39:56.903180    4039 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:39:56.903346    4039 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:39:56.903498    4039 out.go:352] Setting JSON to false
	I1025 18:39:56.903512    4039 mustload.go:65] Loading cluster: multinode-293000
	I1025 18:39:56.903549    4039 notify.go:220] Checking for updates...
	I1025 18:39:56.903780    4039 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:39:56.903790    4039 status.go:174] checking status of multinode-293000 ...
	I1025 18:39:56.904126    4039 status.go:371] multinode-293000 host status = "Stopped" (err=<nil>)
	I1025 18:39:56.904130    4039 status.go:384] host is not running, skipping remaining checks
	I1025 18:39:56.904133    4039 status.go:176] multinode-293000 status: &{Name:multinode-293000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1025 18:39:56.905160    1672 retry.go:31] will retry after 9.506199179s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr: exit status 7 (78.298834ms)

-- stdout --
	multinode-293000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 18:40:06.488869    4043 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:40:06.489100    4043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:40:06.489105    4043 out.go:358] Setting ErrFile to fd 2...
	I1025 18:40:06.489108    4043 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:40:06.489264    4043 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:40:06.489450    4043 out.go:352] Setting JSON to false
	I1025 18:40:06.489465    4043 mustload.go:65] Loading cluster: multinode-293000
	I1025 18:40:06.489520    4043 notify.go:220] Checking for updates...
	I1025 18:40:06.489726    4043 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:40:06.489735    4043 status.go:174] checking status of multinode-293000 ...
	I1025 18:40:06.490054    4043 status.go:371] multinode-293000 host status = "Stopped" (err=<nil>)
	I1025 18:40:06.490059    4043 status.go:384] host is not running, skipping remaining checks
	I1025 18:40:06.490061    4043 status.go:176] multinode-293000 status: &{Name:multinode-293000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1025 18:40:06.491082    1672 retry.go:31] will retry after 12.625533722s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr: exit status 7 (80.142417ms)

-- stdout --
	multinode-293000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 18:40:19.196759    4045 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:40:19.196972    4045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:40:19.196976    4045 out.go:358] Setting ErrFile to fd 2...
	I1025 18:40:19.196979    4045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:40:19.197139    4045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:40:19.197291    4045 out.go:352] Setting JSON to false
	I1025 18:40:19.197303    4045 mustload.go:65] Loading cluster: multinode-293000
	I1025 18:40:19.197337    4045 notify.go:220] Checking for updates...
	I1025 18:40:19.197546    4045 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:40:19.197555    4045 status.go:174] checking status of multinode-293000 ...
	I1025 18:40:19.197846    4045 status.go:371] multinode-293000 host status = "Stopped" (err=<nil>)
	I1025 18:40:19.197851    4045 status.go:384] host is not running, skipping remaining checks
	I1025 18:40:19.197853    4045 status.go:176] multinode-293000 status: &{Name:multinode-293000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-293000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000: exit status 7 (35.707417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (48.12s)
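
Note: the retry.go lines show the harness polling status with randomized, roughly increasing waits (0.86s, 2.1s, 1.8s, 3.0s, 6.4s, 11s, ...) until its time budget runs out; the host never leaves Stopped, so every poll returns exit status 7. A sketch of that backoff-with-jitter pattern (the parameters and helper are illustrative, not the harness's):

    // Sketch: exponential backoff with jitter, mirroring the
    // "will retry after ..." pattern in the log above.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retryBackoff(attempts int, base time.Duration, fn func() error) error {
        wait := base
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Randomize around the current wait, then double it for next time.
            jittered := wait/2 + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("will retry after %v: %v\n", jittered, err)
            time.Sleep(jittered)
            wait *= 2
        }
        return err
    }

    func main() {
        _ = retryBackoff(5, time.Second, func() error {
            return fmt.Errorf("exit status 7") // the host stays Stopped, so every poll fails
        })
    }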

TestMultiNode/serial/RestartKeepsNodes (8.48s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-293000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-293000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-293000: (3.112397166s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-293000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-293000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.221238708s)

-- stdout --
	* [multinode-293000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-293000" primary control-plane node in "multinode-293000" cluster
	* Restarting existing qemu2 VM for "multinode-293000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-293000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:40:22.447133    4071 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:40:22.447345    4071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:40:22.447353    4071 out.go:358] Setting ErrFile to fd 2...
	I1025 18:40:22.447357    4071 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:40:22.447508    4071 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:40:22.448703    4071 out.go:352] Setting JSON to false
	I1025 18:40:22.467984    4071 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4193,"bootTime":1729902629,"procs":557,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:40:22.468060    4071 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:40:22.472695    4071 out.go:177] * [multinode-293000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:40:22.479690    4071 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:40:22.479732    4071 notify.go:220] Checking for updates...
	I1025 18:40:22.486668    4071 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:40:22.489642    4071 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:40:22.492715    4071 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:40:22.495648    4071 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:40:22.498720    4071 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:40:22.501956    4071 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:40:22.502012    4071 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:40:22.506647    4071 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 18:40:22.513556    4071 start.go:297] selected driver: qemu2
	I1025 18:40:22.513561    4071 start.go:901] validating driver "qemu2" against &{Name:multinode-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.31.2 ClusterName:multinode-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:40:22.513605    4071 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:40:22.516085    4071 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:40:22.516110    4071 cni.go:84] Creating CNI manager for ""
	I1025 18:40:22.516132    4071 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1025 18:40:22.516178    4071 start.go:340] cluster config:
	{Name:multinode-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:40:22.520639    4071 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:40:22.527607    4071 out.go:177] * Starting "multinode-293000" primary control-plane node in "multinode-293000" cluster
	I1025 18:40:22.531636    4071 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:40:22.531649    4071 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:40:22.531658    4071 cache.go:56] Caching tarball of preloaded images
	I1025 18:40:22.531747    4071 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:40:22.531752    4071 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:40:22.531804    4071 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/multinode-293000/config.json ...
	I1025 18:40:22.532219    4071 start.go:360] acquireMachinesLock for multinode-293000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:40:22.532273    4071 start.go:364] duration metric: took 48.125µs to acquireMachinesLock for "multinode-293000"
	I1025 18:40:22.532282    4071 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:40:22.532288    4071 fix.go:54] fixHost starting: 
	I1025 18:40:22.532411    4071 fix.go:112] recreateIfNeeded on multinode-293000: state=Stopped err=<nil>
	W1025 18:40:22.532419    4071 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:40:22.535616    4071 out.go:177] * Restarting existing qemu2 VM for "multinode-293000" ...
	I1025 18:40:22.543674    4071 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:40:22.543719    4071 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:91:a0:1a:fc:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/disk.qcow2
	I1025 18:40:22.545910    4071 main.go:141] libmachine: STDOUT: 
	I1025 18:40:22.545928    4071 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:40:22.545956    4071 fix.go:56] duration metric: took 13.667417ms for fixHost
	I1025 18:40:22.545961    4071 start.go:83] releasing machines lock for "multinode-293000", held for 13.683167ms
	W1025 18:40:22.545967    4071 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:40:22.546006    4071 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:40:22.546010    4071 start.go:729] Will try again in 5 seconds ...
	I1025 18:40:27.548090    4071 start.go:360] acquireMachinesLock for multinode-293000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:40:27.548491    4071 start.go:364] duration metric: took 309.75µs to acquireMachinesLock for "multinode-293000"
	I1025 18:40:27.548629    4071 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:40:27.548653    4071 fix.go:54] fixHost starting: 
	I1025 18:40:27.549392    4071 fix.go:112] recreateIfNeeded on multinode-293000: state=Stopped err=<nil>
	W1025 18:40:27.549418    4071 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:40:27.553874    4071 out.go:177] * Restarting existing qemu2 VM for "multinode-293000" ...
	I1025 18:40:27.557824    4071 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:40:27.558039    4071 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:91:a0:1a:fc:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/disk.qcow2
	I1025 18:40:27.567497    4071 main.go:141] libmachine: STDOUT: 
	I1025 18:40:27.567548    4071 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:40:27.567611    4071 fix.go:56] duration metric: took 18.964417ms for fixHost
	I1025 18:40:27.567625    4071 start.go:83] releasing machines lock for "multinode-293000", held for 19.09375ms
	W1025 18:40:27.567812    4071 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-293000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-293000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:40:27.575869    4071 out.go:201] 
	W1025 18:40:27.578840    4071 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:40:27.578883    4071 out.go:270] * 
	* 
	W1025 18:40:27.581566    4071 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:40:27.590872    4071 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-293000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-293000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000: exit status 7 (36.096666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.48s)
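
Every start attempt in this trace fails at the same step: the qemu2 driver launches QEMU through socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet and exits with "Connection refused", so the VM never boots and the node list goes stale. As a minimal standalone sketch (not part of the test suite; the socket path is simply the one quoted in the errors above), the daemon's reachability can be checked from Go by dialing the unix socket directly:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket the qemu2 driver hands to socket_vmnet_client.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Prints a "connection refused" error when the daemon is down,
			// matching the driver failures in the trace above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}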

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 node delete m03: exit status 83 (44.476209ms)

-- stdout --
	* The control-plane node multinode-293000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-293000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-293000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 status --alsologtostderr: exit status 7 (33.604125ms)

-- stdout --
	multinode-293000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 18:40:27.792224    4085 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:40:27.792396    4085 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:40:27.792399    4085 out.go:358] Setting ErrFile to fd 2...
	I1025 18:40:27.792401    4085 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:40:27.792518    4085 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:40:27.792631    4085 out.go:352] Setting JSON to false
	I1025 18:40:27.792642    4085 mustload.go:65] Loading cluster: multinode-293000
	I1025 18:40:27.792702    4085 notify.go:220] Checking for updates...
	I1025 18:40:27.792852    4085 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:40:27.792861    4085 status.go:174] checking status of multinode-293000 ...
	I1025 18:40:27.793114    4085 status.go:371] multinode-293000 host status = "Stopped" (err=<nil>)
	I1025 18:40:27.793117    4085 status.go:384] host is not running, skipping remaining checks
	I1025 18:40:27.793119    4085 status.go:176] multinode-293000 status: &{Name:multinode-293000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-293000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000: exit status 7 (34.256042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
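
The post-mortem helpers render node state with a Go text/template, passing --format={{.Host}} to minikube status; the template is applied to the status struct dumped at status.go:176 above. A minimal sketch of that rendering, assuming a simplified stand-in type (the field names mirror the log line, but this is not minikube's actual definition):

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a simplified stand-in for the struct printed at status.go:176.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		// The same template string the helper passes via --format.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		tmpl.Execute(os.Stdout, Status{Name: "multinode-293000", Host: "Stopped"}) // prints: Stopped
	}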

TestMultiNode/serial/StopMultiNode (2.18s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-293000 stop: (2.045684625s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 status: exit status 7 (64.110375ms)

-- stdout --
	multinode-293000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-293000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-293000 status --alsologtostderr: exit status 7 (35.973291ms)

-- stdout --
	multinode-293000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1025 18:40:29.972677    4103 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:40:29.972873    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:40:29.972877    4103 out.go:358] Setting ErrFile to fd 2...
	I1025 18:40:29.972879    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:40:29.973025    4103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:40:29.973159    4103 out.go:352] Setting JSON to false
	I1025 18:40:29.973170    4103 mustload.go:65] Loading cluster: multinode-293000
	I1025 18:40:29.973215    4103 notify.go:220] Checking for updates...
	I1025 18:40:29.973394    4103 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:40:29.973407    4103 status.go:174] checking status of multinode-293000 ...
	I1025 18:40:29.973656    4103 status.go:371] multinode-293000 host status = "Stopped" (err=<nil>)
	I1025 18:40:29.973659    4103 status.go:384] host is not running, skipping remaining checks
	I1025 18:40:29.973661    4103 status.go:176] multinode-293000 status: &{Name:multinode-293000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-293000 status --alsologtostderr": multinode-293000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-293000 status --alsologtostderr": multinode-293000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000: exit status 7 (34.553708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.18s)
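
The stop itself succeeds in about two seconds; the failure comes from the assertions at multinode_test.go:364 and :368, which expect one "host: Stopped" and one "kubelet: Stopped" entry per node, while the status output above lists only the control-plane node because the second node was never created. The counting idea, reduced to an illustrative sketch (the real assertions live in multinode_test.go):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output as captured above: only the control-plane node appears.
		status := "multinode-293000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		// A two-node cluster should report two stopped hosts and two stopped kubelets.
		fmt.Println(strings.Count(status, "host: Stopped"))    // 1, expected 2
		fmt.Println(strings.Count(status, "kubelet: Stopped")) // 1, expected 2
	}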

TestMultiNode/serial/RestartMultiNode (5.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-293000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-293000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.194056125s)

-- stdout --
	* [multinode-293000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-293000" primary control-plane node in "multinode-293000" cluster
	* Restarting existing qemu2 VM for "multinode-293000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-293000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:40:30.041004    4107 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:40:30.041183    4107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:40:30.041187    4107 out.go:358] Setting ErrFile to fd 2...
	I1025 18:40:30.041189    4107 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:40:30.041324    4107 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:40:30.042386    4107 out.go:352] Setting JSON to false
	I1025 18:40:30.059991    4107 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4201,"bootTime":1729902629,"procs":555,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:40:30.060057    4107 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:40:30.065462    4107 out.go:177] * [multinode-293000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:40:30.072404    4107 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:40:30.072455    4107 notify.go:220] Checking for updates...
	I1025 18:40:30.079434    4107 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:40:30.082416    4107 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:40:30.085371    4107 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:40:30.088453    4107 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:40:30.091405    4107 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:40:30.094704    4107 config.go:182] Loaded profile config "multinode-293000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:40:30.094970    4107 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:40:30.099403    4107 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 18:40:30.106380    4107 start.go:297] selected driver: qemu2
	I1025 18:40:30.106387    4107 start.go:901] validating driver "qemu2" against &{Name:multinode-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:40:30.106450    4107 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:40:30.109053    4107 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:40:30.109077    4107 cni.go:84] Creating CNI manager for ""
	I1025 18:40:30.109098    4107 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1025 18:40:30.109136    4107 start.go:340] cluster config:
	{Name:multinode-293000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:40:30.113760    4107 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:40:30.122424    4107 out.go:177] * Starting "multinode-293000" primary control-plane node in "multinode-293000" cluster
	I1025 18:40:30.126398    4107 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:40:30.126412    4107 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:40:30.126418    4107 cache.go:56] Caching tarball of preloaded images
	I1025 18:40:30.126470    4107 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:40:30.126476    4107 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:40:30.126527    4107 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/multinode-293000/config.json ...
	I1025 18:40:30.126941    4107 start.go:360] acquireMachinesLock for multinode-293000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:40:30.126971    4107 start.go:364] duration metric: took 24.125µs to acquireMachinesLock for "multinode-293000"
	I1025 18:40:30.126980    4107 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:40:30.126985    4107 fix.go:54] fixHost starting: 
	I1025 18:40:30.127105    4107 fix.go:112] recreateIfNeeded on multinode-293000: state=Stopped err=<nil>
	W1025 18:40:30.127115    4107 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:40:30.135415    4107 out.go:177] * Restarting existing qemu2 VM for "multinode-293000" ...
	I1025 18:40:30.139223    4107 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:40:30.139260    4107 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:91:a0:1a:fc:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/disk.qcow2
	I1025 18:40:30.141556    4107 main.go:141] libmachine: STDOUT: 
	I1025 18:40:30.141576    4107 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:40:30.141607    4107 fix.go:56] duration metric: took 14.621583ms for fixHost
	I1025 18:40:30.141613    4107 start.go:83] releasing machines lock for "multinode-293000", held for 14.637416ms
	W1025 18:40:30.141620    4107 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:40:30.141669    4107 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:40:30.141674    4107 start.go:729] Will try again in 5 seconds ...
	I1025 18:40:35.143895    4107 start.go:360] acquireMachinesLock for multinode-293000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:40:35.144364    4107 start.go:364] duration metric: took 351.791µs to acquireMachinesLock for "multinode-293000"
	I1025 18:40:35.144524    4107 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:40:35.144545    4107 fix.go:54] fixHost starting: 
	I1025 18:40:35.145329    4107 fix.go:112] recreateIfNeeded on multinode-293000: state=Stopped err=<nil>
	W1025 18:40:35.145357    4107 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:40:35.153804    4107 out.go:177] * Restarting existing qemu2 VM for "multinode-293000" ...
	I1025 18:40:35.157904    4107 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:40:35.158161    4107 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:91:a0:1a:fc:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/multinode-293000/disk.qcow2
	I1025 18:40:35.168388    4107 main.go:141] libmachine: STDOUT: 
	I1025 18:40:35.168449    4107 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:40:35.168547    4107 fix.go:56] duration metric: took 24.004542ms for fixHost
	I1025 18:40:35.168569    4107 start.go:83] releasing machines lock for "multinode-293000", held for 24.178583ms
	W1025 18:40:35.168740    4107 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-293000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-293000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:40:35.175839    4107 out.go:201] 
	W1025 18:40:35.179959    4107 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:40:35.179980    4107 out.go:270] * 
	* 
	W1025 18:40:35.182539    4107 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:40:35.189866    4107 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-293000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000: exit status 7 (75.366ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)
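
The trace shows the driver's recovery loop: a first restart attempt fails, fixHost returns, minikube waits five seconds ("Will try again in 5 seconds ..."), retries once, and then exits with GUEST_PROVISION. A sketch of that retry shape under stated assumptions (startWithRetry and startHost are hypothetical stand-ins, not minikube's API):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry mirrors the pattern in the trace: try once, wait five
	// seconds, try once more, then give up and surface the error.
	func startWithRetry(startHost func() error) error {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			return startHost()
		}
		return nil
	}

	func main() {
		err := startWithRetry(func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		})
		fmt.Println("exiting:", err) // both attempts fail, as in the log
	}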

TestMultiNode/serial/ValidateNameConflict (20.12s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-293000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-293000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-293000-m01 --driver=qemu2 : exit status 80 (9.847380334s)

-- stdout --
	* [multinode-293000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-293000-m01" primary control-plane node in "multinode-293000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-293000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-293000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-293000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-293000-m02 --driver=qemu2 : exit status 80 (10.027503958s)

-- stdout --
	* [multinode-293000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-293000-m02" primary control-plane node in "multinode-293000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-293000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-293000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-293000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-293000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-293000: exit status 83 (84.581042ms)

-- stdout --
	* The control-plane node multinode-293000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-293000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-293000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-293000 -n multinode-293000: exit status 7 (35.156458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-293000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.12s)
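
This test deliberately creates profiles named multinode-293000-m01 and multinode-293000-m02, which collide with the "-mNN" suffix minikube uses for additional node names, and then verifies that node add against the base profile is still handled. Here the check never gets that far, because neither profile's VM can start. Purely for illustration, and assuming the suffix convention inferred from the profile names above rather than anything in minikube's source, the collision can be detected with a regular expression:

	package main

	import (
		"fmt"
		"regexp"
	)

	// nodeSuffix matches names ending in "-m" plus digits, e.g. "-m01", "-m02".
	var nodeSuffix = regexp.MustCompile(`-m\d+$`)

	func main() {
		for _, name := range []string{"multinode-293000", "multinode-293000-m01", "multinode-293000-m02"} {
			fmt.Printf("%s collides with node naming: %v\n", name, nodeSuffix.MatchString(name))
		}
	}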

TestPreload (10.25s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-766000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-766000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.089418375s)

-- stdout --
	* [test-preload-766000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-766000" primary control-plane node in "test-preload-766000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-766000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:40:55.546231    4169 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:40:55.546379    4169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:40:55.546382    4169 out.go:358] Setting ErrFile to fd 2...
	I1025 18:40:55.546385    4169 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:40:55.546543    4169 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:40:55.547662    4169 out.go:352] Setting JSON to false
	I1025 18:40:55.565428    4169 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4226,"bootTime":1729902629,"procs":553,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:40:55.565506    4169 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:40:55.570811    4169 out.go:177] * [test-preload-766000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:40:55.578672    4169 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:40:55.578776    4169 notify.go:220] Checking for updates...
	I1025 18:40:55.585564    4169 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:40:55.588611    4169 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:40:55.591658    4169 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:40:55.593004    4169 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:40:55.595687    4169 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:40:55.598977    4169 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:40:55.599028    4169 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:40:55.603486    4169 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:40:55.610636    4169 start.go:297] selected driver: qemu2
	I1025 18:40:55.610643    4169 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:40:55.610651    4169 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:40:55.613092    4169 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:40:55.616651    4169 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:40:55.619735    4169 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:40:55.619758    4169 cni.go:84] Creating CNI manager for ""
	I1025 18:40:55.619780    4169 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:40:55.619788    4169 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 18:40:55.619829    4169 start.go:340] cluster config:
	{Name:test-preload-766000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:40:55.624492    4169 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:40:55.632585    4169 out.go:177] * Starting "test-preload-766000" primary control-plane node in "test-preload-766000" cluster
	I1025 18:40:55.636624    4169 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1025 18:40:55.636693    4169 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/test-preload-766000/config.json ...
	I1025 18:40:55.636711    4169 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/test-preload-766000/config.json: {Name:mk19c9c57793bc547006bc8aaa8b784d2db9400e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:40:55.636713    4169 cache.go:107] acquiring lock: {Name:mk3749aa17cfed9cec0374ffa4b00d003145c15a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:40:55.636711    4169 cache.go:107] acquiring lock: {Name:mkf1ce1b100d7ec92b05254016b8b3f3f6436310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:40:55.636744    4169 cache.go:107] acquiring lock: {Name:mk222577c55e1b63affd2db2070b81e6fc88c570 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:40:55.636745    4169 cache.go:107] acquiring lock: {Name:mk00375303fdeaa97600e96a449e12b3d1e48045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:40:55.636861    4169 cache.go:107] acquiring lock: {Name:mka9723e35afeb414d5aba0aee81c14f719a0087 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:40:55.636895    4169 cache.go:107] acquiring lock: {Name:mke62583c3b19853806cfc47808fc9ad45070099 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:40:55.636990    4169 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1025 18:40:55.637088    4169 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1025 18:40:55.637130    4169 start.go:360] acquireMachinesLock for test-preload-766000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:40:55.637147    4169 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1025 18:40:55.637158    4169 cache.go:107] acquiring lock: {Name:mk98664818f1933363a82eec6b96a5d41ad4776a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:40:55.637162    4169 cache.go:107] acquiring lock: {Name:mka6db199970552bf83b6cc29d88dc832685802a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:40:55.637191    4169 start.go:364] duration metric: took 52.5µs to acquireMachinesLock for "test-preload-766000"
	I1025 18:40:55.637199    4169 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1025 18:40:55.636995    4169 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1025 18:40:55.637205    4169 start.go:93] Provisioning new machine with config: &{Name:test-preload-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:40:55.637245    4169 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:40:55.637386    4169 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1025 18:40:55.637742    4169 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:40:55.641631    4169 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 18:40:55.642194    4169 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:40:55.650210    4169 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1025 18:40:55.650259    4169 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1025 18:40:55.650335    4169 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1025 18:40:55.650757    4169 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1025 18:40:55.650818    4169 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1025 18:40:55.650843    4169 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1025 18:40:55.652170    4169 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:40:55.652185    4169 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
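Each "daemon lookup ... No such image" line is the cache-warming path probing the local Docker daemon before falling back to the registry. A minimal sketch of that daemon-then-remote fallback, assuming the go-containerregistry package; retrieveImage is a hypothetical helper, not minikube's actual code:

package main

import (
	"fmt"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/daemon"
	"github.com/google/go-containerregistry/pkg/v1/remote"
)

// retrieveImage tries the local Docker daemon first, then falls back to the
// remote registry, mirroring the "daemon lookup" log lines above.
func retrieveImage(imgName string) (v1.Image, error) {
	ref, err := name.ParseReference(imgName)
	if err != nil {
		return nil, err
	}
	if img, err := daemon.Image(ref); err == nil {
		return img, nil // hit in the local daemon
	}
	// "No such image" from the daemon: pull from the registry instead.
	return remote.Image(ref)
}

func main() {
	img, err := retrieveImage("registry.k8s.io/pause:3.7")
	if err != nil {
		fmt.Println("retrieve failed:", err)
		return
	}
	d, _ := img.Digest()
	fmt.Println("retrieved digest:", d)
}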
	I1025 18:40:55.658789    4169 start.go:159] libmachine.API.Create for "test-preload-766000" (driver="qemu2")
	I1025 18:40:55.658810    4169 client.go:168] LocalClient.Create starting
	I1025 18:40:55.658878    4169 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:40:55.658918    4169 main.go:141] libmachine: Decoding PEM data...
	I1025 18:40:55.658928    4169 main.go:141] libmachine: Parsing certificate...
	I1025 18:40:55.658964    4169 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:40:55.658993    4169 main.go:141] libmachine: Decoding PEM data...
	I1025 18:40:55.659000    4169 main.go:141] libmachine: Parsing certificate...
	I1025 18:40:55.659343    4169 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:40:55.829906    4169 main.go:141] libmachine: Creating SSH key...
	I1025 18:40:56.028122    4169 main.go:141] libmachine: Creating Disk image...
	I1025 18:40:56.028138    4169 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:40:56.028330    4169 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/disk.qcow2
	I1025 18:40:56.038647    4169 main.go:141] libmachine: STDOUT: 
	I1025 18:40:56.038674    4169 main.go:141] libmachine: STDERR: 
	I1025 18:40:56.038743    4169 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/disk.qcow2 +20000M
	I1025 18:40:56.048439    4169 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:40:56.048464    4169 main.go:141] libmachine: STDERR: 
	I1025 18:40:56.048483    4169 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/disk.qcow2
	I1025 18:40:56.048487    4169 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:40:56.048498    4169 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:40:56.048532    4169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:32:b3:86:99:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/disk.qcow2
	I1025 18:40:56.050794    4169 main.go:141] libmachine: STDOUT: 
	I1025 18:40:56.050817    4169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:40:56.050848    4169 client.go:171] duration metric: took 392.039708ms to LocalClient.Create
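The create fails on the QEMU side: socket_vmnet_client obtains the VM's network file descriptor from the socket_vmnet daemon's unix socket, and "Connection refused" means nothing is listening at /var/run/socket_vmnet on this host. The condition can be reproduced in isolation with a plain unix-domain connect; a minimal probe, assuming the same socket path:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the socket_vmnet daemon the same way socket_vmnet_client must:
	// a plain unix-domain connect. "Connection refused" here means the
	// daemon is not running (or the socket path is stale).
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}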
	I1025 18:40:56.223799    4169 cache.go:162] opening:  /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1025 18:40:56.230901    4169 cache.go:162] opening:  /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1025 18:40:56.253124    4169 cache.go:162] opening:  /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1025 18:40:56.340283    4169 cache.go:162] opening:  /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1025 18:40:56.341918    4169 cache.go:162] opening:  /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1025 18:40:56.428651    4169 cache.go:162] opening:  /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W1025 18:40:56.487806    4169 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1025 18:40:56.487884    4169 cache.go:162] opening:  /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1025 18:40:56.498976    4169 cache.go:157] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1025 18:40:56.498997    4169 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 862.268583ms
	I1025 18:40:56.499020    4169 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1025 18:40:57.166427    4169 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1025 18:40:57.166536    4169 cache.go:162] opening:  /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 18:40:57.613699    4169 cache.go:157] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1025 18:40:57.613754    4169 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 1.976747792s
	I1025 18:40:57.613782    4169 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1025 18:40:57.633712    4169 cache.go:157] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 18:40:57.633758    4169 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.997084667s
	I1025 18:40:57.633785    4169 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 18:40:58.051072    4169 start.go:128] duration metric: took 2.4138395s to createHost
	I1025 18:40:58.051135    4169 start.go:83] releasing machines lock for "test-preload-766000", held for 2.413983792s
	W1025 18:40:58.051193    4169 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:40:58.061206    4169 out.go:177] * Deleting "test-preload-766000" in qemu2 ...
	W1025 18:40:58.088531    4169 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:40:58.088559    4169 start.go:729] Will try again in 5 seconds ...
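StartHost failures are treated as retryable once: the half-created machine is deleted and a second create is attempted after a fixed delay, after which the error becomes fatal (GUEST_PROVISION below). A sketch of that delete-and-retry shape; the function names are placeholders, not minikube internals:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startWithRetry sketches the pattern visible in the log: one retry after a
// fixed delay, deleting the failed machine in between.
func startWithRetry(create func() error, deleteMachine func(), delay time.Duration) error {
	if err := create(); err == nil {
		return nil
	}
	deleteMachine()   // "* Deleting ... in qemu2 ..."
	time.Sleep(delay) // "Will try again in 5 seconds ..."
	return create()   // a second failure is fatal (GUEST_PROVISION)
}

func main() {
	create := func() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	err := startWithRetry(create, func() { fmt.Println("deleting machine") }, 5*time.Second)
	fmt.Println("final error:", err)
}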
	I1025 18:40:59.673765    4169 cache.go:157] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1025 18:40:59.673825    4169 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.037062583s
	I1025 18:40:59.673854    4169 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1025 18:41:00.537872    4169 cache.go:157] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1025 18:41:00.537926    4169 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.901300833s
	I1025 18:41:00.537953    4169 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1025 18:41:00.729133    4169 cache.go:157] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1025 18:41:00.729175    4169 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.09257025s
	I1025 18:41:00.729197    4169 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1025 18:41:02.677052    4169 cache.go:157] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1025 18:41:02.677097    4169 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.040392375s
	I1025 18:41:02.677123    4169 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
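Each "save to tar file ... succeeded" line writes the fetched image into the on-disk cache as a docker-compatible tarball under .minikube/cache/images/arm64/. A minimal sketch of that step, assuming go-containerregistry's tarball writer; the destination path is illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/google/go-containerregistry/pkg/name"
	"github.com/google/go-containerregistry/pkg/v1/remote"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

func main() {
	// Fetch one image and write it as a docker-compatible tarball, following
	// the cache path layout seen in the log (destination is illustrative).
	ref, err := name.ParseReference("registry.k8s.io/kube-proxy:v1.24.4")
	if err != nil {
		panic(err)
	}
	img, err := remote.Image(ref)
	if err != nil {
		panic(err)
	}
	dst := ".minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4"
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		panic(err)
	}
	if err := tarball.WriteToFile(dst, ref, img); err != nil {
		panic(err)
	}
	fmt.Println("cached", ref.String(), "->", dst)
}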
	I1025 18:41:03.088911    4169 start.go:360] acquireMachinesLock for test-preload-766000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:41:03.089412    4169 start.go:364] duration metric: took 430.125µs to acquireMachinesLock for "test-preload-766000"
	I1025 18:41:03.089534    4169 start.go:93] Provisioning new machine with config: &{Name:test-preload-766000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-766000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:41:03.089777    4169 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:41:03.096450    4169 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 18:41:03.145944    4169 start.go:159] libmachine.API.Create for "test-preload-766000" (driver="qemu2")
	I1025 18:41:03.145991    4169 client.go:168] LocalClient.Create starting
	I1025 18:41:03.146126    4169 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:41:03.146268    4169 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:03.146291    4169 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:03.146363    4169 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:41:03.146419    4169 main.go:141] libmachine: Decoding PEM data...
	I1025 18:41:03.146433    4169 main.go:141] libmachine: Parsing certificate...
	I1025 18:41:03.147033    4169 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:41:03.321850    4169 main.go:141] libmachine: Creating SSH key...
	I1025 18:41:03.536087    4169 main.go:141] libmachine: Creating Disk image...
	I1025 18:41:03.536098    4169 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:41:03.536315    4169 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/disk.qcow2
	I1025 18:41:03.546805    4169 main.go:141] libmachine: STDOUT: 
	I1025 18:41:03.546830    4169 main.go:141] libmachine: STDERR: 
	I1025 18:41:03.546902    4169 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/disk.qcow2 +20000M
	I1025 18:41:03.555712    4169 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:41:03.555726    4169 main.go:141] libmachine: STDERR: 
	I1025 18:41:03.555740    4169 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/disk.qcow2
	I1025 18:41:03.555749    4169 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:41:03.555757    4169 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:41:03.555788    4169 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:7c:5f:d3:b3:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/test-preload-766000/disk.qcow2
	I1025 18:41:03.557751    4169 main.go:141] libmachine: STDOUT: 
	I1025 18:41:03.557766    4169 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:41:03.557788    4169 client.go:171] duration metric: took 411.801375ms to LocalClient.Create
	I1025 18:41:05.558467    4169 start.go:128] duration metric: took 2.468655583s to createHost
	I1025 18:41:05.558502    4169 start.go:83] releasing machines lock for "test-preload-766000", held for 2.469116667s
	W1025 18:41:05.558754    4169 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-766000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:41:05.569644    4169 out.go:201] 
	W1025 18:41:05.574762    4169 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:41:05.574784    4169 out.go:270] * 
	W1025 18:41:05.577345    4169 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:41:05.586755    4169 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-766000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-10-25 18:41:05.604507 -0700 PDT m=+3506.211106793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-766000 -n test-preload-766000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-766000 -n test-preload-766000: exit status 7 (73.701125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-766000" host is not running, skipping log retrieval (state="Stopped")
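The post-mortem probe treats non-zero exits from `minikube status` as encoded cluster state rather than command failure, which is why exit status 7 alongside the printed "Stopped" is flagged "may be ok". A sketch of the same probe, keeping both the exit code and the printed host state; the binary path and profile are the ones from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run `minikube status` and keep both the exit code and the printed host
	// state. Non-zero codes from `status` encode cluster state (the log shows
	// 7 alongside "Stopped"), so they are not treated as command failures.
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "test-preload-766000")
	out, err := cmd.Output()
	state := strings.TrimSpace(string(out))
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Printf("host state %q (exit status %d, may be ok)\n", state, code)
}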
helpers_test.go:175: Cleaning up "test-preload-766000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-766000
--- FAIL: TestPreload (10.25s)

TestScheduledStopUnix (10.2s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-441000 --memory=2048 --driver=qemu2 
E1025 18:41:07.339985    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-441000 --memory=2048 --driver=qemu2 : exit status 80 (10.042895708s)

-- stdout --
	* [scheduled-stop-441000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-441000" primary control-plane node in "scheduled-stop-441000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-441000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-441000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-10-25 18:41:15.804823 -0700 PDT m=+3516.411634834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-441000 -n scheduled-stop-441000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-441000 -n scheduled-stop-441000: exit status 7 (76.877125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-441000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-441000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-441000
--- FAIL: TestScheduledStopUnix (10.20s)

TestSkaffold (12.63s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1112465848 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1112465848 version: (1.012782416s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-837000 --memory=2600 --driver=qemu2 
E1025 18:41:24.237210    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-837000 --memory=2600 --driver=qemu2 : exit status 80 (9.775943375s)

-- stdout --
	* [skaffold-837000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-837000" primary control-plane node in "skaffold-837000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-837000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-837000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80
panic.go:629: *** TestSkaffold FAILED at 2024-10-25 18:41:28.441315 -0700 PDT m=+3529.048389793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-837000 -n skaffold-837000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-837000 -n skaffold-837000: exit status 7 (68.701042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-837000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-837000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-837000
--- FAIL: TestSkaffold (12.63s)

TestRunningBinaryUpgrade (599.16s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2374657587 start -p running-upgrade-889000 --memory=2200 --vm-driver=qemu2 
E1025 18:42:55.557000    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2374657587 start -p running-upgrade-889000 --memory=2200 --vm-driver=qemu2 : (52.849266959s)
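TestRunningBinaryUpgrade drives two binaries against the same profile: the released v1.26.0 binary creates the cluster (it succeeds above in roughly 53s), then the binary under test restarts the profile in place. A sketch of that two-step drive, using the paths shown in the log; runStart is a hypothetical helper:

package main

import (
	"fmt"
	"os/exec"
)

// runStart launches one `minikube start` against the shared profile.
func runStart(binary, profile string, extra ...string) error {
	args := append([]string{"start", "-p", profile, "--memory=2200"}, extra...)
	out, err := exec.Command(binary, args...).CombinedOutput()
	fmt.Printf("%s:\n%s\n", binary, out)
	return err
}

func main() {
	const profile = "running-upgrade-889000"
	// Step 1: the released binary creates the cluster.
	if err := runStart("/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2374657587",
		profile, "--vm-driver=qemu2"); err != nil {
		fmt.Println("legacy start failed:", err)
		return
	}
	// Step 2: the binary under test upgrades the running cluster in place.
	if err := runStart("out/minikube-darwin-arm64",
		profile, "--alsologtostderr", "-v=1", "--driver=qemu2"); err != nil {
		fmt.Println("upgrade start failed:", err)
	}
}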
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-889000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-889000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m32.096946917s)

-- stdout --
	* [running-upgrade-889000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-889000" primary control-plane node in "running-upgrade-889000" cluster
	* Updating the running qemu2 "running-upgrade-889000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1025 18:43:05.190768    4599 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:43:05.191087    4599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:43:05.191091    4599 out.go:358] Setting ErrFile to fd 2...
	I1025 18:43:05.191093    4599 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:43:05.191230    4599 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:43:05.192214    4599 out.go:352] Setting JSON to false
	I1025 18:43:05.211152    4599 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4356,"bootTime":1729902629,"procs":561,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:43:05.211247    4599 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:43:05.216138    4599 out.go:177] * [running-upgrade-889000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:43:05.224124    4599 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:43:05.224152    4599 notify.go:220] Checking for updates...
	I1025 18:43:05.231056    4599 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:43:05.235054    4599 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:43:05.238026    4599 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:43:05.241119    4599 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:43:05.244101    4599 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:43:05.247268    4599 config.go:182] Loaded profile config "running-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:43:05.250028    4599 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1025 18:43:05.253086    4599 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:43:05.257053    4599 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 18:43:05.264057    4599 start.go:297] selected driver: qemu2
	I1025 18:43:05.264063    4599 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62322 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 18:43:05.264106    4599 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:43:05.266848    4599 cni.go:84] Creating CNI manager for ""
	I1025 18:43:05.266882    4599 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:43:05.266902    4599 start.go:340] cluster config:
	{Name:running-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62322 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 18:43:05.266948    4599 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:43:05.275044    4599 out.go:177] * Starting "running-upgrade-889000" primary control-plane node in "running-upgrade-889000" cluster
	I1025 18:43:05.278998    4599 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1025 18:43:05.279030    4599 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1025 18:43:05.279037    4599 cache.go:56] Caching tarball of preloaded images
	I1025 18:43:05.279114    4599 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:43:05.279121    4599 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1025 18:43:05.279176    4599 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/config.json ...
	I1025 18:43:05.279499    4599 start.go:360] acquireMachinesLock for running-upgrade-889000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:43:05.279529    4599 start.go:364] duration metric: took 23.5µs to acquireMachinesLock for "running-upgrade-889000"
	I1025 18:43:05.279537    4599 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:43:05.279542    4599 fix.go:54] fixHost starting: 
	I1025 18:43:05.280155    4599 fix.go:112] recreateIfNeeded on running-upgrade-889000: state=Running err=<nil>
	W1025 18:43:05.280165    4599 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:43:05.284156    4599 out.go:177] * Updating the running qemu2 "running-upgrade-889000" VM ...
	I1025 18:43:05.292046    4599 machine.go:93] provisionDockerMachine start ...
	I1025 18:43:05.292094    4599 main.go:141] libmachine: Using SSH client type: native
	I1025 18:43:05.292214    4599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b925f0] 0x104b94e30 <nil>  [] 0s} localhost 62290 <nil> <nil>}
	I1025 18:43:05.292219    4599 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 18:43:05.351631    4599 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-889000
	
	I1025 18:43:05.351647    4599 buildroot.go:166] provisioning hostname "running-upgrade-889000"
	I1025 18:43:05.351708    4599 main.go:141] libmachine: Using SSH client type: native
	I1025 18:43:05.351817    4599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b925f0] 0x104b94e30 <nil>  [] 0s} localhost 62290 <nil> <nil>}
	I1025 18:43:05.351825    4599 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-889000 && echo "running-upgrade-889000" | sudo tee /etc/hostname
	I1025 18:43:05.409872    4599 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-889000
	
	I1025 18:43:05.409934    4599 main.go:141] libmachine: Using SSH client type: native
	I1025 18:43:05.410048    4599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b925f0] 0x104b94e30 <nil>  [] 0s} localhost 62290 <nil> <nil>}
	I1025 18:43:05.410058    4599 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-889000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-889000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-889000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 18:43:05.462491    4599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
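provisionDockerMachine drives the guest entirely over SSH (localhost:62290, forwarded by QEMU), running the hostname commands shown above. A minimal equivalent of one such round-trip, assuming golang.org/x/crypto/ssh; the key path, port, and user are the ones in the log, and host-key checking is skipped because the target is a throwaway local VM:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and forwarded port are taken from the log lines above.
	keyBytes, err := os.ReadFile("/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/running-upgrade-889000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "localhost:62290", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway local VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.Output("hostname")
	fmt.Printf("hostname => %q, err=%v\n", out, err)
}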
	I1025 18:43:05.462504    4599 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19868-1112/.minikube CaCertPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19868-1112/.minikube}
	I1025 18:43:05.462512    4599 buildroot.go:174] setting up certificates
	I1025 18:43:05.462516    4599 provision.go:84] configureAuth start
	I1025 18:43:05.462523    4599 provision.go:143] copyHostCerts
	I1025 18:43:05.462583    4599 exec_runner.go:144] found /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.pem, removing ...
	I1025 18:43:05.462589    4599 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.pem
	I1025 18:43:05.462717    4599 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.pem (1082 bytes)
	I1025 18:43:05.462911    4599 exec_runner.go:144] found /Users/jenkins/minikube-integration/19868-1112/.minikube/cert.pem, removing ...
	I1025 18:43:05.462915    4599 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19868-1112/.minikube/cert.pem
	I1025 18:43:05.462958    4599 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19868-1112/.minikube/cert.pem (1123 bytes)
	I1025 18:43:05.463059    4599 exec_runner.go:144] found /Users/jenkins/minikube-integration/19868-1112/.minikube/key.pem, removing ...
	I1025 18:43:05.463063    4599 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19868-1112/.minikube/key.pem
	I1025 18:43:05.463103    4599 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19868-1112/.minikube/key.pem (1675 bytes)
	I1025 18:43:05.463197    4599 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-889000 san=[127.0.0.1 localhost minikube running-upgrade-889000]
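configureAuth mints a server certificate signed by the local minikube CA, carrying the SANs from the line above (127.0.0.1, localhost, minikube, running-upgrade-889000). A compact sketch of issuing such a SAN-bearing certificate with crypto/x509; the self-signed CA here stands in for the ca.pem/ca-key.pem pair read earlier:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-889000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "running-upgrade-889000"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Println("issued server cert,", len(der), "bytes of DER")
}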
	I1025 18:43:05.571939    4599 provision.go:177] copyRemoteCerts
	I1025 18:43:05.571995    4599 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 18:43:05.572004    4599 sshutil.go:53] new ssh client: &{IP:localhost Port:62290 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/running-upgrade-889000/id_rsa Username:docker}
	I1025 18:43:05.602829    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 18:43:05.609667    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 18:43:05.616531    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 18:43:05.623464    4599 provision.go:87] duration metric: took 160.943458ms to configureAuth
	I1025 18:43:05.623475    4599 buildroot.go:189] setting minikube options for container-runtime
	I1025 18:43:05.623592    4599 config.go:182] Loaded profile config "running-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:43:05.623640    4599 main.go:141] libmachine: Using SSH client type: native
	I1025 18:43:05.623731    4599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b925f0] 0x104b94e30 <nil>  [] 0s} localhost 62290 <nil> <nil>}
	I1025 18:43:05.623737    4599 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 18:43:05.676690    4599 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1025 18:43:05.676703    4599 buildroot.go:70] root file system type: tmpfs
	I1025 18:43:05.676758    4599 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 18:43:05.676838    4599 main.go:141] libmachine: Using SSH client type: native
	I1025 18:43:05.676959    4599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b925f0] 0x104b94e30 <nil>  [] 0s} localhost 62290 <nil> <nil>}
	I1025 18:43:05.676992    4599 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 18:43:05.732412    4599 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 18:43:05.732475    4599 main.go:141] libmachine: Using SSH client type: native
	I1025 18:43:05.732593    4599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b925f0] 0x104b94e30 <nil>  [] 0s} localhost 62290 <nil> <nil>}
	I1025 18:43:05.732601    4599 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 18:43:05.784959    4599 main.go:141] libmachine: SSH cmd err, output: <nil>: 
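Note: the command above is minikube's change-detection idiom: diff -u exits non-zero when the rendered unit differs from the installed one (or when no unit is installed yet), and only then does the || block swap in docker.service.new, daemon-reload, enable, and restart Docker. The empty output here is consistent with an already up-to-date unit. To inspect what systemd actually loaded, something like:

    systemctl cat docker.service
    systemctl show docker --property=ExecStart
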
	I1025 18:43:05.784970    4599 machine.go:96] duration metric: took 492.928292ms to provisionDockerMachine
	I1025 18:43:05.784976    4599 start.go:293] postStartSetup for "running-upgrade-889000" (driver="qemu2")
	I1025 18:43:05.784982    4599 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 18:43:05.785040    4599 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 18:43:05.785052    4599 sshutil.go:53] new ssh client: &{IP:localhost Port:62290 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/running-upgrade-889000/id_rsa Username:docker}
	I1025 18:43:05.814102    4599 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 18:43:05.815959    4599 info.go:137] Remote host: Buildroot 2021.02.12
	I1025 18:43:05.815966    4599 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19868-1112/.minikube/addons for local assets ...
	I1025 18:43:05.816028    4599 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19868-1112/.minikube/files for local assets ...
	I1025 18:43:05.816121    4599 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19868-1112/.minikube/files/etc/ssl/certs/16722.pem -> 16722.pem in /etc/ssl/certs
	I1025 18:43:05.816241    4599 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 18:43:05.818941    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/files/etc/ssl/certs/16722.pem --> /etc/ssl/certs/16722.pem (1708 bytes)
	I1025 18:43:05.829170    4599 start.go:296] duration metric: took 44.186917ms for postStartSetup
	I1025 18:43:05.829192    4599 fix.go:56] duration metric: took 549.662208ms for fixHost
	I1025 18:43:05.829264    4599 main.go:141] libmachine: Using SSH client type: native
	I1025 18:43:05.829374    4599 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b925f0] 0x104b94e30 <nil>  [] 0s} localhost 62290 <nil> <nil>}
	I1025 18:43:05.829379    4599 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 18:43:05.884306    4599 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729906985.849167804
	
	I1025 18:43:05.884315    4599 fix.go:216] guest clock: 1729906985.849167804
	I1025 18:43:05.884319    4599 fix.go:229] Guest: 2024-10-25 18:43:05.849167804 -0700 PDT Remote: 2024-10-25 18:43:05.829193 -0700 PDT m=+0.660796292 (delta=19.974804ms)
	I1025 18:43:05.884334    4599 fix.go:200] guest clock delta is within tolerance: 19.974804ms
	I1025 18:43:05.884337    4599 start.go:83] releasing machines lock for "running-upgrade-889000", held for 604.816125ms
	I1025 18:43:05.884414    4599 ssh_runner.go:195] Run: cat /version.json
	I1025 18:43:05.884430    4599 sshutil.go:53] new ssh client: &{IP:localhost Port:62290 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/running-upgrade-889000/id_rsa Username:docker}
	I1025 18:43:05.884414    4599 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 18:43:05.884453    4599 sshutil.go:53] new ssh client: &{IP:localhost Port:62290 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/running-upgrade-889000/id_rsa Username:docker}
	W1025 18:43:05.884998    4599 sshutil.go:64] dial failure (will retry): dial tcp [::1]:62290: connect: connection refused
	I1025 18:43:05.885016    4599 retry.go:31] will retry after 170.692341ms: dial tcp [::1]:62290: connect: connection refused
	W1025 18:43:05.911298    4599 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1025 18:43:05.911340    4599 ssh_runner.go:195] Run: systemctl --version
	I1025 18:43:05.913182    4599 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 18:43:05.914914    4599 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 18:43:05.914949    4599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1025 18:43:05.917798    4599 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1025 18:43:05.924733    4599 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 18:43:05.924746    4599 start.go:495] detecting cgroup driver to use...
	I1025 18:43:05.924819    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:43:05.930306    4599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1025 18:43:05.933479    4599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 18:43:05.936390    4599 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 18:43:05.936425    4599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 18:43:05.940133    4599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:43:05.943705    4599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 18:43:05.946501    4599 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:43:05.949527    4599 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 18:43:05.952794    4599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 18:43:05.956420    4599 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1025 18:43:05.959783    4599 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1025 18:43:05.962916    4599 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 18:43:05.965744    4599 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
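Note: the sed edits above rewrite /etc/containerd/config.toml in place: SystemdCgroup = false selects the cgroupfs driver, the io.containerd.runc.v2 substitutions retire the legacy runtime shims, conf_dir points CNI at /etc/cni/net.d, and enable_unprivileged_ports is re-inserted under the CRI plugin block. The last two commands make bridged traffic visible to iptables and enable IPv4 forwarding, both kubeadm preflight requirements. A sketch to spot-check the result on the guest:

    grep -nE 'SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
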
	I1025 18:43:05.969028    4599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:43:06.067345    4599 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 18:43:06.073941    4599 start.go:495] detecting cgroup driver to use...
	I1025 18:43:06.074005    4599 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 18:43:06.081906    4599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 18:43:06.087160    4599 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 18:43:06.126891    4599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 18:43:06.131395    4599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 18:43:06.136126    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:43:06.141653    4599 ssh_runner.go:195] Run: which cri-dockerd
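Note: crictl has no configuration of its own beyond /etc/crictl.yaml, which names its default endpoint. The first write (18:43:05.924) pointed it at containerd; with containerd stopped in favour of Docker, the file was just rewritten to the cri-dockerd socket. A sketch to confirm which runtime crictl will talk to:

    cat /etc/crictl.yaml
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
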
	I1025 18:43:06.143015    4599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 18:43:06.145797    4599 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 18:43:06.150986    4599 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 18:43:06.238987    4599 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 18:43:06.335902    4599 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 18:43:06.335966    4599 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 18:43:06.341337    4599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:43:06.434008    4599 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:43:19.279029    4599 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.845273333s)
	I1025 18:43:19.279114    4599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1025 18:43:19.284115    4599 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1025 18:43:19.293093    4599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 18:43:19.299112    4599 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 18:43:19.377320    4599 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:43:19.461322    4599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:43:19.540616    4599 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 18:43:19.546690    4599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 18:43:19.551955    4599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:43:19.633889    4599 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1025 18:43:19.673855    4599 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 18:43:19.674671    4599 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
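Note: cri-docker is socket-activated, so the unmask/enable/restart sequence above has to bring up both cri-docker.socket and cri-docker.service before the kubelet can reach the runtime; minikube then waits up to 60s for the socket path to appear. Equivalent manual checks, as a sketch:

    systemctl is-active cri-docker.socket cri-docker.service
    test -S /var/run/cri-dockerd.sock && echo "socket present"
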
	I1025 18:43:19.677765    4599 start.go:563] Will wait 60s for crictl version
	I1025 18:43:19.677825    4599 ssh_runner.go:195] Run: which crictl
	I1025 18:43:19.679544    4599 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 18:43:19.692147    4599 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1025 18:43:19.692228    4599 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:43:19.704845    4599 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:43:19.722106    4599 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1025 18:43:19.722251    4599 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
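Note: 10.0.2.2 is the host-side gateway address in QEMU's user-mode (slirp) networking, so the grep above verifies that host.minikube.internal resolves from the guest back to the macOS host:

    grep host.minikube.internal /etc/hosts    # expected: 10.0.2.2  host.minikube.internal
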
	I1025 18:43:19.723713    4599 kubeadm.go:883] updating cluster {Name:running-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62322 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1025 18:43:19.723761    4599 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1025 18:43:19.723814    4599 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:43:19.734071    4599 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 18:43:19.734080    4599 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1025 18:43:19.734136    4599 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 18:43:19.737635    4599 ssh_runner.go:195] Run: which lz4
	I1025 18:43:19.738815    4599 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 18:43:19.740063    4599 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 18:43:19.740073    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1025 18:43:20.736400    4599 docker.go:653] duration metric: took 997.647459ms to copy over tarball
	I1025 18:43:20.736469    4599 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 18:43:21.844924    4599 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.108462833s)
	I1025 18:43:21.844938    4599 ssh_runner.go:146] rm: /preloaded.tar.lz4
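Note: the preload is unpacked straight into /var with GNU tar's -I flag, which names lz4 as the decompression filter, while --xattrs-include security.capability preserves file capabilities on the image layers. To list a preload's contents without extracting it (a sketch against a local copy of the same tarball):

    tar -I lz4 -tf preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 | head
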
	I1025 18:43:21.860890    4599 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 18:43:21.864556    4599 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1025 18:43:21.869797    4599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:43:21.952350    4599 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:43:23.140940    4599 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.188598625s)
	I1025 18:43:23.141046    4599 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:43:23.153758    4599 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 18:43:23.153770    4599 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
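Note: the preload was built when these images were published under k8s.gcr.io, but this minikube expects registry.k8s.io names, so every control-plane image is treated as not preloaded and reloaded from the host cache below. In simple cases the same mismatch can be papered over by retagging, which is a manual workaround rather than what minikube does here:

    docker tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1
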
	I1025 18:43:23.153775    4599 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 18:43:23.162109    4599 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:43:23.164152    4599 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 18:43:23.165528    4599 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:43:23.165720    4599 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 18:43:23.167319    4599 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 18:43:23.167330    4599 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 18:43:23.168437    4599 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 18:43:23.169535    4599 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 18:43:23.170081    4599 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1025 18:43:23.170276    4599 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 18:43:23.171002    4599 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 18:43:23.171494    4599 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1025 18:43:23.172645    4599 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1025 18:43:23.172773    4599 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:43:23.173625    4599 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1025 18:43:23.174491    4599 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:43:23.706956    4599 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1025 18:43:23.709932    4599 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1025 18:43:23.723440    4599 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1025 18:43:23.723473    4599 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 18:43:23.723527    4599 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1025 18:43:23.724361    4599 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 18:43:23.728287    4599 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1025 18:43:23.728309    4599 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 18:43:23.728390    4599 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1025 18:43:23.742639    4599 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1025 18:43:23.742663    4599 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1025 18:43:23.742685    4599 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 18:43:23.742744    4599 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 18:43:23.747269    4599 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1025 18:43:23.753471    4599 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1025 18:43:23.811562    4599 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1025 18:43:23.813031    4599 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1025 18:43:23.823390    4599 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1025 18:43:23.823420    4599 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 18:43:23.823489    4599 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1025 18:43:23.825361    4599 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1025 18:43:23.825375    4599 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1025 18:43:23.825419    4599 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1025 18:43:23.842593    4599 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1025 18:43:23.846977    4599 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1025 18:43:23.908077    4599 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1025 18:43:23.920416    4599 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1025 18:43:23.920438    4599 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1025 18:43:23.920501    4599 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1025 18:43:23.930990    4599 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1025 18:43:23.931134    4599 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1025 18:43:23.932928    4599 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1025 18:43:23.932939    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1025 18:43:23.939889    4599 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1025 18:43:23.939899    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1025 18:43:23.966857    4599 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
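Note: the load step pipes the transferred tar archive into docker load under sudo; docker load -i is the functionally equivalent direct form:

    sudo docker load -i /var/lib/minikube/images/pause_3.7
    docker images registry.k8s.io/pause:3.7
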
	W1025 18:43:23.991521    4599 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1025 18:43:23.991690    4599 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:43:24.002005    4599 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1025 18:43:24.002034    4599 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:43:24.002094    4599 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:43:24.012839    4599 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1025 18:43:24.012981    4599 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1025 18:43:24.014411    4599 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1025 18:43:24.014424    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1025 18:43:24.058705    4599 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1025 18:43:24.058719    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1025 18:43:24.098085    4599 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W1025 18:43:24.154832    4599 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1025 18:43:24.154973    4599 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:43:24.170052    4599 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1025 18:43:24.170077    4599 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:43:24.170142    4599 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:43:24.185052    4599 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 18:43:24.185193    4599 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1025 18:43:24.186694    4599 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1025 18:43:24.186705    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1025 18:43:24.216027    4599 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1025 18:43:24.216041    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1025 18:43:24.455157    4599 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1025 18:43:24.455197    4599 cache_images.go:92] duration metric: took 1.301441875s to LoadCachedImages
	W1025 18:43:24.455240    4599 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
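Note: LoadCachedImages fails here because the kube-proxy image file is absent from the host-side cache, not from the guest; the pause, coredns and storage-provisioner transfers above all succeeded. A sketch to see which cached image files actually exist on the host:

    ls -l /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/
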
	I1025 18:43:24.455245    4599 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1025 18:43:24.455300    4599 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-889000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 18:43:24.455373    4599 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 18:43:24.468545    4599 cni.go:84] Creating CNI manager for ""
	I1025 18:43:24.468557    4599 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:43:24.468565    4599 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1025 18:43:24.468573    4599 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-889000 NodeName:running-upgrade-889000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 18:43:24.468650    4599 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-889000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 18:43:24.468715    4599 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1025 18:43:24.472017    4599 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 18:43:24.472059    4599 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 18:43:24.474631    4599 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1025 18:43:24.479619    4599 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 18:43:24.484975    4599 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
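Note: the three scp-from-memory writes install the kubelet drop-in, the base kubelet unit, and the staged kubeadm config; systemd merges the drop-in over the base unit. As a sketch, the merged unit can be inspected and the staged config dry-run against the same binaries before anything is applied:

    systemctl cat kubelet
    sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
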
	I1025 18:43:24.490523    4599 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1025 18:43:24.491862    4599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:43:24.569629    4599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 18:43:24.574714    4599 certs.go:68] Setting up /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000 for IP: 10.0.2.15
	I1025 18:43:24.574723    4599 certs.go:194] generating shared ca certs ...
	I1025 18:43:24.574730    4599 certs.go:226] acquiring lock for ca certs: {Name:mk4d96eff7eec2b0b424f4d9808345f1ae37fa52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:43:24.574893    4599 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.key
	I1025 18:43:24.574928    4599 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/proxy-client-ca.key
	I1025 18:43:24.574935    4599 certs.go:256] generating profile certs ...
	I1025 18:43:24.574999    4599 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/client.key
	I1025 18:43:24.575014    4599 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/apiserver.key.b7d72942
	I1025 18:43:24.575024    4599 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/apiserver.crt.b7d72942 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1025 18:43:24.742232    4599 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/apiserver.crt.b7d72942 ...
	I1025 18:43:24.742244    4599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/apiserver.crt.b7d72942: {Name:mk1eec84ed377a0bec5e1996e96474a372a102eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:43:24.742749    4599 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/apiserver.key.b7d72942 ...
	I1025 18:43:24.742763    4599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/apiserver.key.b7d72942: {Name:mk79cddcb5e944f35c1ec583ef0d80fb2a69ed43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:43:24.742928    4599 certs.go:381] copying /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/apiserver.crt.b7d72942 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/apiserver.crt
	I1025 18:43:24.743062    4599 certs.go:385] copying /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/apiserver.key.b7d72942 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/apiserver.key
	I1025 18:43:24.743204    4599 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/proxy-client.key
	I1025 18:43:24.743360    4599 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/1672.pem (1338 bytes)
	W1025 18:43:24.743387    4599 certs.go:480] ignoring /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/1672_empty.pem, impossibly tiny 0 bytes
	I1025 18:43:24.743392    4599 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 18:43:24.743413    4599 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem (1082 bytes)
	I1025 18:43:24.743433    4599 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem (1123 bytes)
	I1025 18:43:24.743451    4599 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/key.pem (1675 bytes)
	I1025 18:43:24.743494    4599 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/files/etc/ssl/certs/16722.pem (1708 bytes)
	I1025 18:43:24.743848    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 18:43:24.751642    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 18:43:24.758831    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 18:43:24.766272    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 18:43:24.773793    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 18:43:24.780716    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 18:43:24.787374    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 18:43:24.794530    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 18:43:24.801566    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/files/etc/ssl/certs/16722.pem --> /usr/share/ca-certificates/16722.pem (1708 bytes)
	I1025 18:43:24.808461    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 18:43:24.815260    4599 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/1672.pem --> /usr/share/ca-certificates/1672.pem (1338 bytes)
	I1025 18:43:24.822146    4599 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 18:43:24.827078    4599 ssh_runner.go:195] Run: openssl version
	I1025 18:43:24.829061    4599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16722.pem && ln -fs /usr/share/ca-certificates/16722.pem /etc/ssl/certs/16722.pem"
	I1025 18:43:24.832169    4599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16722.pem
	I1025 18:43:24.833574    4599 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:50 /usr/share/ca-certificates/16722.pem
	I1025 18:43:24.833601    4599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16722.pem
	I1025 18:43:24.835505    4599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16722.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 18:43:24.838368    4599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 18:43:24.841786    4599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:43:24.843474    4599 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:43:24.843504    4599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:43:24.845097    4599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 18:43:24.847869    4599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1672.pem && ln -fs /usr/share/ca-certificates/1672.pem /etc/ssl/certs/1672.pem"
	I1025 18:43:24.850770    4599 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672.pem
	I1025 18:43:24.852187    4599 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:50 /usr/share/ca-certificates/1672.pem
	I1025 18:43:24.852211    4599 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672.pem
	I1025 18:43:24.853879    4599 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1672.pem /etc/ssl/certs/51391683.0"
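Note: each ln -fs above creates OpenSSL's subject-hash lookup link: TLS libraries locate a CA in /etc/ssl/certs by the hash that openssl x509 -hash prints, suffixed with .0. For minikubeCA that hash is b5213941, matching the link created at 18:43:24.845:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0
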
	I1025 18:43:24.856953    4599 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 18:43:24.858436    4599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 18:43:24.860198    4599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 18:43:24.861988    4599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 18:43:24.863844    4599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 18:43:24.865746    4599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 18:43:24.867535    4599 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
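Note: openssl x509 -checkend 86400 exits 0 if the certificate will still be valid 86400 seconds (24 hours) from now and 1 otherwise, so each Run above is a pass/fail expiry probe; only certs about to lapse get regenerated. The same check made explicit:

    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt && echo "valid for at least 24h" || echo "expires within 24h"
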
	I1025 18:43:24.869253    4599 kubeadm.go:392] StartCluster: {Name:running-upgrade-889000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62322 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-889000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 18:43:24.869326    4599 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:43:24.879542    4599 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 18:43:24.882712    4599 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1025 18:43:24.882721    4599 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1025 18:43:24.882747    4599 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 18:43:24.885657    4599 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:43:24.885902    4599 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-889000" does not appear in /Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:43:24.885950    4599 kubeconfig.go:62] /Users/jenkins/minikube-integration/19868-1112/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-889000" cluster setting kubeconfig missing "running-upgrade-889000" context setting]
	I1025 18:43:24.886126    4599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/kubeconfig: {Name:mk88d1ac601cc80b64027f8557b82969027e8e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:43:24.886830    4599 kapi.go:59] client config for running-upgrade-889000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/client.key", CAFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1065ee680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:43:24.887171    4599 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 18:43:24.889953    4599 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-889000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1025 18:43:24.889959    4599 kubeadm.go:1160] stopping kube-system containers ...
	I1025 18:43:24.890007    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:43:24.901123    4599 docker.go:483] Stopping containers: [03760555268f ed4c591382fd 780c0da270ff 34a4e0843139 bda26047fd72 755f24743259 35f00200f8b4 3fbb4d7c4f18 0bcd752f5fd8 f7e71665db93 b9f81912ee14 8eb4801b6c0c]
	I1025 18:43:24.901196    4599 ssh_runner.go:195] Run: docker stop 03760555268f ed4c591382fd 780c0da270ff 34a4e0843139 bda26047fd72 755f24743259 35f00200f8b4 3fbb4d7c4f18 0bcd752f5fd8 f7e71665db93 b9f81912ee14 8eb4801b6c0c
	I1025 18:43:24.912438    4599 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 18:43:25.015757    4599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:43:25.020320    4599 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct 26 01:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct 26 01:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct 26 01:43 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Oct 26 01:42 /etc/kubernetes/scheduler.conf
	
	I1025 18:43:25.020372    4599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/admin.conf
	I1025 18:43:25.024005    4599 kubeadm.go:163] "https://control-plane.minikube.internal:62322" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:43:25.024050    4599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 18:43:25.027842    4599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/kubelet.conf
	I1025 18:43:25.031318    4599 kubeadm.go:163] "https://control-plane.minikube.internal:62322" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:43:25.031349    4599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 18:43:25.034895    4599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/controller-manager.conf
	I1025 18:43:25.037718    4599 kubeadm.go:163] "https://control-plane.minikube.internal:62322" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:43:25.037749    4599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 18:43:25.040406    4599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/scheduler.conf
	I1025 18:43:25.043319    4599 kubeadm.go:163] "https://control-plane.minikube.internal:62322" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:43:25.043347    4599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
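	The four grep-and-remove checks above follow one pattern: any kubeconfig that does not reference the expected control-plane endpoint is deleted so kubeadm can regenerate it. A condensed sketch of that loop (endpoint and paths from the log):
	    for f in admin kubelet controller-manager scheduler; do
	        # keep the file only if it points at the expected endpoint
	        sudo grep -q 'https://control-plane.minikube.internal:62322' "/etc/kubernetes/${f}.conf" \
	            || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done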
	I1025 18:43:25.046017    4599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:43:25.048874    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:43:25.069523    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:43:25.674582    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:43:25.871648    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:43:25.893668    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
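	The restart path then re-runs selected kubeadm init phases against the new config rather than a full init; condensed from the five commands above (the loop variable is illustrative, the phases and paths are from the log):
	    BIN=/var/lib/minikube/binaries/v1.24.1
	    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
	        # unquoted $phase word-splits into e.g. "certs" "all"
	        sudo env PATH="$BIN:$PATH" kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done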
	I1025 18:43:25.917586    4599 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:43:25.917673    4599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:26.419922    4599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:26.919739    4599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:43:26.924812    4599 api_server.go:72] duration metric: took 1.007247167s to wait for apiserver process to appear ...
	I1025 18:43:26.924824    4599 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:43:26.924846    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:43:31.926924    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:43:31.927032    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:43:36.927807    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:43:36.927899    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:43:41.928588    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:43:41.928642    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:43:46.929482    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:43:46.929565    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:43:51.930972    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:43:51.931074    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:43:56.932832    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:43:56.932924    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:44:01.934032    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:44:01.934162    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:44:06.936599    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:44:06.936693    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:44:11.939265    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:44:11.939320    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:44:16.941658    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:44:16.941756    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:44:21.944438    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:44:21.944523    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:44:26.947011    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
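	Each probe above is a GET against /healthz with roughly a 5-second client timeout, repeated until the apiserver answers. A rough shell equivalent, assuming curl as a stand-in for minikube's Go HTTP client (the 2-second sleep is an assumption; the log shows the retry interval filled by log gathering instead):
	    until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz >/dev/null 2>&1; do
	        echo "apiserver /healthz not ready; gathering logs before the next probe"
	        sleep 2
	    done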
	I1025 18:44:26.947253    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:44:26.974009    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:44:26.974159    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:44:26.989474    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:44:26.989559    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:44:27.002696    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:44:27.002777    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:44:27.013071    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:44:27.013155    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:44:27.023304    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:44:27.023404    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:44:27.033596    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:44:27.033671    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:44:27.050769    4599 logs.go:282] 0 containers: []
	W1025 18:44:27.050791    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:44:27.050854    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:44:27.060873    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:44:27.060890    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:44:27.060894    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:44:27.074874    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:44:27.074886    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:44:27.086247    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:44:27.086261    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:44:27.097776    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:44:27.097790    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:44:27.102205    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:44:27.102214    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:44:27.177474    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:44:27.177488    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:44:27.190615    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:44:27.190630    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:44:27.201812    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:44:27.201831    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:44:27.218694    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:44:27.218706    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:44:27.233738    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:44:27.233749    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:44:27.245450    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:44:27.245463    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:44:27.262117    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:44:27.262132    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:44:27.278967    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:44:27.278981    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:44:27.317573    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:44:27.317583    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:44:27.336060    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:44:27.336070    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:44:27.351744    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:44:27.351754    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:44:27.362483    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:44:27.362498    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
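	The diagnostic pass above, which repeats after every failed healthz probe for the rest of this log, enumerates each control-plane component's containers and tails their logs. A condensed sketch of one pass (component names and the 400-line tail are from the log; journalctl/dmesg/describe-nodes steps omitted for brevity):
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
	        for id in $(docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}'); do
	            docker logs --tail 400 "$id"
	        done
	    done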
	I1025 18:44:29.887103    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:44:34.887469    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:44:34.888023    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:44:34.927138    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:44:34.927294    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:44:34.952397    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:44:34.952524    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:44:34.966825    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:44:34.966910    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:44:34.978509    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:44:34.978587    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:44:34.993958    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:44:34.994036    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:44:35.005140    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:44:35.005216    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:44:35.015174    4599 logs.go:282] 0 containers: []
	W1025 18:44:35.015188    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:44:35.015259    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:44:35.025802    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:44:35.025820    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:44:35.025827    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:44:35.060805    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:44:35.060819    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:44:35.074186    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:44:35.074196    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:44:35.085761    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:44:35.085775    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:44:35.098604    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:44:35.098613    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:44:35.137248    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:44:35.137256    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:44:35.150990    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:44:35.151001    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:44:35.163580    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:44:35.163594    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:44:35.178817    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:44:35.178830    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:44:35.190439    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:44:35.190450    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:44:35.214583    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:44:35.214590    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:44:35.218597    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:44:35.218605    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:44:35.230229    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:44:35.230241    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:44:35.242037    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:44:35.242048    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:44:35.253993    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:44:35.254006    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:44:35.270858    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:44:35.270869    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:44:35.282232    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:44:35.282244    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:44:37.798839    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:44:42.801192    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:44:42.802170    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:44:42.844507    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:44:42.844668    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:44:42.866515    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:44:42.866621    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:44:42.880149    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:44:42.880236    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:44:42.892319    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:44:42.892400    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:44:42.902992    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:44:42.903062    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:44:42.914202    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:44:42.914296    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:44:42.929208    4599 logs.go:282] 0 containers: []
	W1025 18:44:42.929220    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:44:42.929288    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:44:42.939567    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:44:42.939584    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:44:42.939588    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:44:42.955886    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:44:42.955897    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:44:42.967507    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:44:42.967519    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:44:42.983095    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:44:42.983106    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:44:43.002315    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:44:43.002326    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:44:43.022111    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:44:43.022125    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:44:43.033752    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:44:43.033765    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:44:43.057943    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:44:43.057952    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:44:43.093422    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:44:43.093430    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:44:43.097665    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:44:43.097671    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:44:43.111559    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:44:43.111568    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:44:43.147656    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:44:43.147668    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:44:43.161971    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:44:43.161983    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:44:43.173007    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:44:43.173019    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:44:43.184333    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:44:43.184346    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:44:43.196199    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:44:43.196210    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:44:43.216200    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:44:43.216211    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:44:45.729531    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:44:50.732369    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:44:50.732991    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:44:50.771115    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:44:50.771273    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:44:50.791625    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:44:50.791734    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:44:50.806339    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:44:50.806420    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:44:50.818590    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:44:50.818670    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:44:50.829567    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:44:50.829640    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:44:50.839914    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:44:50.840012    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:44:50.849965    4599 logs.go:282] 0 containers: []
	W1025 18:44:50.849976    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:44:50.850040    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:44:50.860776    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:44:50.860793    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:44:50.860801    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:44:50.894992    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:44:50.895007    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:44:50.910136    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:44:50.910150    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:44:50.921478    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:44:50.921490    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:44:50.933148    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:44:50.933161    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:44:50.970222    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:44:50.970236    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:44:50.983898    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:44:50.983911    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:44:50.996239    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:44:50.996250    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:44:51.007695    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:44:51.007705    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:44:51.019324    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:44:51.019336    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:44:51.031045    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:44:51.031057    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:44:51.045395    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:44:51.045405    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:44:51.049719    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:44:51.049727    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:44:51.065011    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:44:51.065027    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:44:51.079536    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:44:51.079548    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:44:51.096503    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:44:51.096512    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:44:51.135572    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:44:51.135586    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:44:53.651331    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:44:58.653980    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:44:58.654526    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:44:58.693418    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:44:58.693574    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:44:58.716127    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:44:58.716255    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:44:58.731797    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:44:58.731882    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:44:58.744748    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:44:58.744830    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:44:58.755984    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:44:58.756063    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:44:58.766995    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:44:58.767061    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:44:58.777730    4599 logs.go:282] 0 containers: []
	W1025 18:44:58.777744    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:44:58.777810    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:44:58.792725    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:44:58.792749    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:44:58.792755    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:44:58.826529    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:44:58.826539    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:44:58.844423    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:44:58.844433    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:44:58.848802    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:44:58.848809    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:44:58.863728    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:44:58.863740    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:44:58.875445    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:44:58.875457    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:44:58.889101    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:44:58.889113    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:44:58.906741    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:44:58.906754    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:44:58.920699    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:44:58.920709    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:44:58.935155    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:44:58.935166    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:44:58.946666    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:44:58.946677    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:44:58.959471    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:44:58.959485    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:44:58.996796    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:44:58.996806    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:44:59.012269    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:44:59.012279    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:44:59.023993    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:44:59.024004    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:44:59.042026    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:44:59.042035    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:44:59.053524    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:44:59.053535    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:45:01.580952    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:45:06.583799    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:45:06.584276    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:45:06.618162    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:45:06.618319    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:45:06.638959    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:45:06.639079    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:45:06.653510    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:45:06.653596    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:45:06.666434    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:45:06.666513    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:45:06.677020    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:45:06.677093    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:45:06.687498    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:45:06.687581    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:45:06.698214    4599 logs.go:282] 0 containers: []
	W1025 18:45:06.698225    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:45:06.698295    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:45:06.708361    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:45:06.708380    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:45:06.708385    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:45:06.729686    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:45:06.729697    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:45:06.755236    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:45:06.755246    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:45:06.759911    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:45:06.759919    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:45:06.773817    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:45:06.773828    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:45:06.793031    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:45:06.793042    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:45:06.804573    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:45:06.804584    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:45:06.821015    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:45:06.821028    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:45:06.832545    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:45:06.832559    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:45:06.849722    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:45:06.849732    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:45:06.861290    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:45:06.861302    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:45:06.897160    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:45:06.897167    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:45:06.930836    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:45:06.930848    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:45:06.942101    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:45:06.942111    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:45:06.953713    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:45:06.953725    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:45:06.967842    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:45:06.967855    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:45:06.981908    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:45:06.981919    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:45:09.495217    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:45:14.497961    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:45:14.498395    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:45:14.532476    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:45:14.532622    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:45:14.553171    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:45:14.553281    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:45:14.567334    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:45:14.567419    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:45:14.579999    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:45:14.580090    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:45:14.590674    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:45:14.590749    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:45:14.601405    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:45:14.601480    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:45:14.611195    4599 logs.go:282] 0 containers: []
	W1025 18:45:14.611210    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:45:14.611266    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:45:14.622986    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:45:14.623004    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:45:14.623009    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:45:14.639415    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:45:14.639427    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:45:14.650959    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:45:14.650971    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:45:14.665661    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:45:14.665673    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:45:14.677342    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:45:14.677355    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:45:14.693994    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:45:14.694003    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:45:14.708066    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:45:14.708079    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:45:14.743941    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:45:14.743954    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:45:14.757791    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:45:14.757805    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:45:14.769283    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:45:14.769295    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:45:14.805896    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:45:14.805908    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:45:14.817545    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:45:14.817559    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:45:14.829089    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:45:14.829103    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:45:14.840632    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:45:14.840644    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:45:14.854836    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:45:14.854848    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:45:14.879394    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:45:14.879401    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:45:14.895097    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:45:14.895108    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:45:17.402106    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:45:22.404464    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:45:22.404847    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:45:22.445935    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:45:22.446086    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:45:22.467782    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:45:22.467896    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:45:22.483062    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:45:22.483148    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:45:22.500001    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:45:22.500082    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:45:22.510593    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:45:22.510673    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:45:22.529293    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:45:22.529372    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:45:22.539971    4599 logs.go:282] 0 containers: []
	W1025 18:45:22.539985    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:45:22.540051    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:45:22.550341    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:45:22.550359    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:45:22.550365    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:45:22.555327    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:45:22.555335    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:45:22.570658    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:45:22.570670    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:45:22.589827    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:45:22.589838    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:45:22.602137    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:45:22.602148    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:45:22.640578    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:45:22.640592    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:45:22.654613    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:45:22.654624    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:45:22.669019    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:45:22.669029    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:45:22.680513    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:45:22.680524    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:45:22.692005    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:45:22.692016    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:45:22.703537    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:45:22.703549    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:45:22.714240    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:45:22.714250    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:45:22.748033    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:45:22.748044    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:45:22.772258    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:45:22.772268    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:45:22.784680    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:45:22.784691    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:45:22.796342    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:45:22.796358    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:45:22.807840    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:45:22.807852    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:45:25.330226    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:45:30.331238    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:45:30.331483    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:45:30.357568    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:45:30.357668    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:45:30.373191    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:45:30.373274    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:45:30.387815    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:45:30.387895    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:45:30.398886    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:45:30.398953    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:45:30.409997    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:45:30.410079    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:45:30.429880    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:45:30.429946    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:45:30.445363    4599 logs.go:282] 0 containers: []
	W1025 18:45:30.445372    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:45:30.445436    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:45:30.456680    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:45:30.456697    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:45:30.456703    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:45:30.471302    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:45:30.471314    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:45:30.476523    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:45:30.476530    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:45:30.512119    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:45:30.512131    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:45:30.523633    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:45:30.523644    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:45:30.535124    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:45:30.535135    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:45:30.547504    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:45:30.547515    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:45:30.562902    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:45:30.562915    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:45:30.588618    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:45:30.588629    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:45:30.626968    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:45:30.626979    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:45:30.640042    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:45:30.640054    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:45:30.651727    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:45:30.651738    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:45:30.667509    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:45:30.667526    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:45:30.679479    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:45:30.679491    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:45:30.696970    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:45:30.696982    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:45:30.712594    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:45:30.712609    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:45:30.726291    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:45:30.726305    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:45:33.249040    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:45:38.251690    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:45:38.251888    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:45:38.264208    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:45:38.264296    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:45:38.275317    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:45:38.275416    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:45:38.286535    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:45:38.286613    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:45:38.297494    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:45:38.297573    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:45:38.307814    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:45:38.307891    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:45:38.322461    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:45:38.322538    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:45:38.332366    4599 logs.go:282] 0 containers: []
	W1025 18:45:38.332378    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:45:38.332441    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:45:38.343043    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
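The block of `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` runs just above is the container-enumeration step: one query per control-plane component, recording how many container IDs (running or exited) match. A short Go sketch of the same enumeration, assuming a local docker CLI; the helper name containerIDs is hypothetical, and in the test the commands actually run inside the guest over SSH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists the IDs of all containers (running or exited) whose
// name matches the k8s_<component> prefix, mirroring the `docker ps -a
// --filter` calls in the log. Hypothetical helper.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, err)
			continue
		}
		fmt.Printf("%d containers: %v (%s)\n", len(ids), ids, c)
	}
}

Two IDs per component in this log correspond to an exited container from the previous boot plus the current one; the zero-ID result for "kindnet" is expected here, since that CNI was never deployed, hence the warning line each cycle.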
	I1025 18:45:38.343060    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:45:38.343065    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:45:38.358235    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:45:38.358248    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:45:38.363293    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:45:38.363300    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:45:38.398079    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:45:38.398090    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:45:38.410316    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:45:38.410329    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:45:38.422355    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:45:38.422367    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:45:38.448833    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:45:38.448845    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:45:38.461020    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:45:38.461032    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:45:38.473817    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:45:38.473831    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:45:38.485509    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:45:38.485520    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:45:38.503441    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:45:38.503450    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:45:38.516389    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:45:38.516402    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:45:38.556163    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:45:38.556173    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:45:38.570578    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:45:38.570589    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:45:38.582817    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:45:38.582832    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:45:38.594631    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:45:38.594644    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:45:38.609402    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:45:38.609415    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
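Each "Gathering logs for ..." pair above maps a diagnostic source to a shell command: container logs via `docker logs --tail 400 <id>`, systemd units via `sudo journalctl -u <unit> -n 400`, plus dmesg and `kubectl describe nodes`. A compact Go sketch of that sweep, run locally for illustration (in the test each command goes through ssh_runner into the guest); gatherLogs is a hypothetical helper and only a few sources from the log are shown:

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs mirrors the per-source commands in the log: container logs are
// read with `docker logs --tail 400 <id>`, systemd units with
// `sudo journalctl -u <unit> -n 400`. Hypothetical sketch under the
// assumptions stated above.
func gatherLogs() map[string]string {
	cmds := map[string][]string{
		"kube-apiserver [49990bfd759a]": {"docker", "logs", "--tail", "400", "49990bfd759a"},
		"kubelet":                       {"sudo", "journalctl", "-u", "kubelet", "-n", "400"},
		"Docker":                        {"sudo", "journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400"},
	}
	out := make(map[string]string)
	for name, argv := range cmds {
		b, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
		if err != nil {
			// A failed source is recorded rather than aborting the sweep, so
			// one dead container does not hide the other components' logs.
			out[name] = fmt.Sprintf("error: %v", err)
			continue
		}
		out[name] = string(b)
	}
	return out
}

func main() {
	for name, logs := range gatherLogs() {
		fmt.Printf("==> %s (%d bytes)\n", name, len(logs))
	}
}

The remainder of the section repeats this probe-enumerate-gather cycle, with fresh timestamps but the same container IDs, until the surrounding test gives up on the apiserver.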
	I1025 18:45:41.125878    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:45:46.128068    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:45:46.128229    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:45:46.140711    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:45:46.140815    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:45:46.152530    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:45:46.152612    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:45:46.163122    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:45:46.163205    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:45:46.173958    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:45:46.174037    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:45:46.184178    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:45:46.184251    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:45:46.194621    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:45:46.194718    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:45:46.204617    4599 logs.go:282] 0 containers: []
	W1025 18:45:46.204628    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:45:46.204708    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:45:46.215627    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:45:46.215645    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:45:46.215651    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:45:46.228135    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:45:46.228148    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:45:46.240022    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:45:46.240037    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:45:46.258880    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:45:46.258891    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:45:46.294621    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:45:46.294635    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:45:46.312838    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:45:46.312849    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:45:46.326196    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:45:46.326207    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:45:46.340890    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:45:46.340906    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:45:46.353573    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:45:46.353584    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:45:46.394303    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:45:46.394314    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:45:46.399118    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:45:46.399124    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:45:46.410681    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:45:46.410692    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:45:46.436824    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:45:46.436832    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:45:46.459209    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:45:46.459218    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:45:46.474090    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:45:46.474104    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:45:46.490196    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:45:46.490208    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:45:46.505363    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:45:46.505374    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:45:49.018017    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:45:54.018295    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:45:54.018443    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:45:54.030763    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:45:54.030863    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:45:54.042361    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:45:54.042445    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:45:54.054294    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:45:54.054392    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:45:54.066517    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:45:54.066597    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:45:54.078967    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:45:54.079048    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:45:54.090618    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:45:54.090721    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:45:54.102762    4599 logs.go:282] 0 containers: []
	W1025 18:45:54.102776    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:45:54.102855    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:45:54.117722    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:45:54.117739    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:45:54.117746    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:45:54.131935    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:45:54.131951    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:45:54.154882    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:45:54.154899    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:45:54.168626    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:45:54.168640    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:45:54.173089    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:45:54.173103    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:45:54.213774    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:45:54.213787    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:45:54.226769    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:45:54.226782    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:45:54.242894    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:45:54.242909    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:45:54.256154    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:45:54.256166    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:45:54.297085    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:45:54.297116    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:45:54.312310    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:45:54.312326    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:45:54.326170    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:45:54.326183    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:45:54.343341    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:45:54.343355    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:45:54.358467    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:45:54.358480    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:45:54.385360    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:45:54.385377    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:45:54.400582    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:45:54.400594    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:45:54.416677    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:45:54.416690    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:45:56.935257    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:46:01.937461    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:46:01.937757    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:46:01.959510    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:46:01.959637    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:46:01.974556    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:46:01.974642    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:46:01.986478    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:46:01.986553    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:46:01.997302    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:46:01.997385    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:46:02.007723    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:46:02.007791    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:46:02.017941    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:46:02.018011    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:46:02.028063    4599 logs.go:282] 0 containers: []
	W1025 18:46:02.028080    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:46:02.028137    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:46:02.038516    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:46:02.038537    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:46:02.038542    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:46:02.051146    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:46:02.051156    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:46:02.062649    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:46:02.062661    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:46:02.074557    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:46:02.074568    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:46:02.086062    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:46:02.086073    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:46:02.098273    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:46:02.098283    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:46:02.136015    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:46:02.136029    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:46:02.149849    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:46:02.149858    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:46:02.164946    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:46:02.164960    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:46:02.176762    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:46:02.176773    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:46:02.193542    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:46:02.193553    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:46:02.198432    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:46:02.198439    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:46:02.235111    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:46:02.235126    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:46:02.249234    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:46:02.249244    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:46:02.274880    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:46:02.274891    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:46:02.287956    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:46:02.287970    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:46:02.303910    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:46:02.303923    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:46:04.817800    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:46:09.820291    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:46:09.820404    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:46:09.831178    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:46:09.831259    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:46:09.841914    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:46:09.841979    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:46:09.852848    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:46:09.852925    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:46:09.864990    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:46:09.865063    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:46:09.876092    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:46:09.876183    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:46:09.888299    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:46:09.888387    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:46:09.900622    4599 logs.go:282] 0 containers: []
	W1025 18:46:09.900635    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:46:09.900701    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:46:09.916279    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:46:09.916298    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:46:09.916303    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:46:09.953941    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:46:09.953953    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:46:09.968324    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:46:09.968336    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:46:09.988547    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:46:09.988558    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:46:10.007448    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:46:10.007459    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:46:10.022047    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:46:10.022058    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:46:10.033397    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:46:10.033408    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:46:10.044290    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:46:10.044301    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:46:10.058036    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:46:10.058048    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:46:10.097969    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:46:10.097977    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:46:10.109513    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:46:10.109523    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:46:10.121860    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:46:10.121872    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:46:10.139099    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:46:10.139109    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:46:10.143724    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:46:10.143733    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:46:10.155036    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:46:10.155048    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:46:10.167697    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:46:10.167707    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:46:10.183887    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:46:10.183897    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:46:12.710951    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:46:17.713232    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:46:17.713817    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:46:17.754423    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:46:17.754589    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:46:17.775954    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:46:17.776071    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:46:17.795371    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:46:17.795470    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:46:17.809897    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:46:17.809970    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:46:17.821910    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:46:17.822000    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:46:17.833220    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:46:17.833302    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:46:17.844515    4599 logs.go:282] 0 containers: []
	W1025 18:46:17.844527    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:46:17.844594    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:46:17.859882    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:46:17.859899    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:46:17.859906    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:46:17.898357    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:46:17.898370    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:46:17.932933    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:46:17.932948    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:46:17.950752    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:46:17.950764    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:46:17.964108    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:46:17.964119    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:46:17.985599    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:46:17.985613    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:46:17.997714    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:46:17.997725    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:46:18.013527    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:46:18.013539    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:46:18.032552    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:46:18.032561    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:46:18.049779    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:46:18.049790    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:46:18.062004    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:46:18.062018    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:46:18.066956    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:46:18.066963    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:46:18.081670    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:46:18.081683    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:46:18.093213    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:46:18.093225    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:46:18.105271    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:46:18.105284    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:46:18.116493    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:46:18.116503    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:46:18.130544    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:46:18.130554    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:46:20.657709    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:46:25.660068    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:46:25.660294    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:46:25.677700    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:46:25.677778    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:46:25.689714    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:46:25.689795    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:46:25.700274    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:46:25.700348    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:46:25.710814    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:46:25.710899    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:46:25.728868    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:46:25.728946    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:46:25.739116    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:46:25.739189    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:46:25.748914    4599 logs.go:282] 0 containers: []
	W1025 18:46:25.748927    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:46:25.749000    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:46:25.763387    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:46:25.763407    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:46:25.763412    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:46:25.775122    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:46:25.775132    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:46:25.799943    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:46:25.799959    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:46:25.822813    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:46:25.822828    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:46:25.835191    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:46:25.835204    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:46:25.873403    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:46:25.873417    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:46:25.916173    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:46:25.916189    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:46:25.929853    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:46:25.929867    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:46:25.947613    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:46:25.947623    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:46:25.965579    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:46:25.965594    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:46:25.976933    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:46:25.976946    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:46:25.988145    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:46:25.988155    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:46:25.999439    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:46:25.999450    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:46:26.022071    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:46:26.022082    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:46:26.033486    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:46:26.033499    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:46:26.038424    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:46:26.038430    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:46:26.051012    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:46:26.051022    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:46:28.568700    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:46:33.571422    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:46:33.571667    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:46:33.583879    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:46:33.583968    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:46:33.595074    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:46:33.595155    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:46:33.605778    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:46:33.605842    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:46:33.618767    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:46:33.618854    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:46:33.630131    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:46:33.630216    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:46:33.641466    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:46:33.641536    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:46:33.651731    4599 logs.go:282] 0 containers: []
	W1025 18:46:33.651741    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:46:33.651799    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:46:33.662030    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:46:33.662047    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:46:33.662051    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:46:33.701571    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:46:33.701581    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:46:33.706343    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:46:33.706351    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:46:33.742119    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:46:33.742130    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:46:33.756423    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:46:33.756433    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:46:33.771880    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:46:33.771889    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:46:33.790297    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:46:33.790308    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:46:33.802169    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:46:33.802182    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:46:33.813471    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:46:33.813483    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:46:33.825114    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:46:33.825126    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:46:33.837168    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:46:33.837180    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:46:33.854736    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:46:33.854746    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:46:33.867567    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:46:33.867578    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:46:33.879154    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:46:33.879166    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:46:33.904288    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:46:33.904300    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:46:33.919143    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:46:33.919152    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:46:33.935295    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:46:33.935307    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:46:36.450249    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:46:41.451280    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:46:41.451810    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:46:41.491205    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:46:41.491372    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:46:41.508720    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:46:41.508835    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:46:41.525317    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:46:41.525415    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:46:41.537427    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:46:41.537521    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:46:41.550472    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:46:41.550543    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:46:41.561161    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:46:41.561255    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:46:41.571135    4599 logs.go:282] 0 containers: []
	W1025 18:46:41.571149    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:46:41.571216    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:46:41.583696    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:46:41.583715    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:46:41.583721    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:46:41.624426    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:46:41.624437    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:46:41.636048    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:46:41.636059    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:46:41.647169    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:46:41.647179    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:46:41.658867    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:46:41.658879    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:46:41.663087    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:46:41.663093    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:46:41.675741    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:46:41.675751    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:46:41.689378    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:46:41.689390    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:46:41.700808    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:46:41.700818    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:46:41.720039    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:46:41.720049    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:46:41.743724    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:46:41.743736    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:46:41.781701    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:46:41.781708    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:46:41.796617    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:46:41.796627    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:46:41.807598    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:46:41.807608    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:46:41.820409    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:46:41.820422    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:46:41.834014    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:46:41.834027    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:46:41.848459    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:46:41.848470    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:46:44.362038    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:46:49.362352    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:46:49.362461    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:46:49.374364    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:46:49.374452    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:46:49.389135    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:46:49.389228    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:46:49.401588    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:46:49.401678    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:46:49.414365    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:46:49.414457    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:46:49.426283    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:46:49.426362    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:46:49.438330    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:46:49.438411    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:46:49.450017    4599 logs.go:282] 0 containers: []
	W1025 18:46:49.450028    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:46:49.450098    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:46:49.462183    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:46:49.462205    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:46:49.462211    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:46:49.501982    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:46:49.501999    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:46:49.540691    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:46:49.540704    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:46:49.556710    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:46:49.556726    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:46:49.569939    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:46:49.569951    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:46:49.589142    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:46:49.589157    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:46:49.604543    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:46:49.604557    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:46:49.621217    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:46:49.621229    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:46:49.635577    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:46:49.635590    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:46:49.649965    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:46:49.649978    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:46:49.663051    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:46:49.663065    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:46:49.682652    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:46:49.682667    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:46:49.696400    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:46:49.696413    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:46:49.709251    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:46:49.709263    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:46:49.734655    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:46:49.734670    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:46:49.740068    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:46:49.740079    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:46:49.755727    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:46:49.755740    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:46:52.270429    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:46:57.272725    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:46:57.273270    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:46:57.314155    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:46:57.314317    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:46:57.336815    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:46:57.336947    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:46:57.352259    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:46:57.352352    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:46:57.365427    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:46:57.365510    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:46:57.376647    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:46:57.376730    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:46:57.387518    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:46:57.387594    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:46:57.398039    4599 logs.go:282] 0 containers: []
	W1025 18:46:57.398052    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:46:57.398109    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:46:57.411995    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:46:57.412015    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:46:57.412021    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:46:57.426348    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:46:57.426361    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:46:57.440423    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:46:57.440433    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:46:57.454755    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:46:57.454767    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:46:57.466624    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:46:57.466633    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:46:57.505487    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:46:57.505497    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:46:57.509884    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:46:57.509892    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:46:57.521805    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:46:57.521817    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:46:57.535099    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:46:57.535112    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:46:57.547164    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:46:57.547174    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:46:57.558746    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:46:57.558758    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:46:57.570016    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:46:57.570027    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:46:57.593187    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:46:57.593196    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:46:57.628591    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:46:57.628603    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:46:57.643792    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:46:57.643806    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:46:57.661536    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:46:57.661545    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:46:57.673850    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:46:57.673863    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
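The "container status" step above uses a shell fallback: try crictl if it resolves on PATH, otherwise fall back to docker ps -a. A Go sketch that captures the spirit of that one-liner (it drops the sudo and assumes root; the function name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerStatus prefers crictl when it is installed, matching the
    // "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
    // fallback in the log line above.
    func containerStatus() (string, error) {
    	tool := "docker"
    	if _, err := exec.LookPath("crictl"); err == nil {
    		tool = "crictl"
    	}
    	out, err := exec.Command(tool, "ps", "-a").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	out, err := containerStatus()
    	if err != nil {
    		fmt.Println("container status failed:", err)
    		return
    	}
    	fmt.Print(out)
    }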
	I1025 18:47:00.187230    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:05.295137    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:05.295313    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:47:05.308966    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:47:05.309058    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:47:05.320873    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:47:05.320952    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:47:05.331429    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:47:05.331511    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:47:05.342223    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:47:05.342305    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:47:05.352795    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:47:05.352868    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:47:05.363238    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:47:05.363318    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:47:05.373518    4599 logs.go:282] 0 containers: []
	W1025 18:47:05.373531    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:47:05.373596    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:47:05.384345    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:47:05.384364    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:47:05.384369    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:47:05.399075    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:47:05.399085    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:47:05.410329    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:47:05.410338    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:47:05.445880    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:47:05.445894    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:47:05.458610    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:47:05.458624    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:47:05.473618    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:47:05.473631    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:47:05.485191    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:47:05.485205    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:47:05.496355    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:47:05.496367    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:47:05.520048    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:47:05.520058    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:47:05.532003    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:47:05.532013    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:47:05.546385    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:47:05.546398    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:47:05.558031    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:47:05.558045    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:47:05.569122    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:47:05.569135    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:47:05.580982    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:47:05.580992    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:47:05.597700    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:47:05.597713    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:47:05.633923    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:47:05.633933    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:47:05.637942    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:47:05.637950    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:47:08.153594    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:13.154169    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:13.154521    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:47:13.190253    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:47:13.190413    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:47:13.211025    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:47:13.211135    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:47:13.225731    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:47:13.225811    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:47:13.238479    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:47:13.238550    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:47:13.249171    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:47:13.249251    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:47:13.259480    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:47:13.259562    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:47:13.269130    4599 logs.go:282] 0 containers: []
	W1025 18:47:13.269140    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:47:13.269196    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:47:13.280005    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:47:13.280023    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:47:13.280028    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:47:13.294717    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:47:13.294731    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:47:13.305880    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:47:13.305895    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:47:13.319640    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:47:13.319650    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:47:13.335163    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:47:13.335177    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:47:13.373386    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:47:13.373397    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:47:13.409323    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:47:13.409338    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:47:13.420833    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:47:13.420844    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:47:13.444858    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:47:13.444868    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:47:13.459471    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:47:13.459482    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:47:13.471130    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:47:13.471143    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:47:13.482759    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:47:13.482772    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:47:13.495161    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:47:13.495173    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:47:13.499299    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:47:13.499308    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:47:13.511765    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:47:13.511778    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:47:13.526347    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:47:13.526360    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:47:13.541021    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:47:13.541032    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:47:16.060724    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:21.063508    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:21.063852    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:47:21.092207    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:47:21.092359    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:47:21.110508    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:47:21.110611    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:47:21.124112    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:47:21.124200    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:47:21.138061    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:47:21.138145    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:47:21.153682    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:47:21.153762    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:47:21.164232    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:47:21.164300    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:47:21.174652    4599 logs.go:282] 0 containers: []
	W1025 18:47:21.174663    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:47:21.174747    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:47:21.203592    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:47:21.203610    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:47:21.203616    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:47:21.217297    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:47:21.217307    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:47:21.239412    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:47:21.239421    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:47:21.253408    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:47:21.253420    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:47:21.266234    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:47:21.266247    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:47:21.278260    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:47:21.278271    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:47:21.296657    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:47:21.296671    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:47:21.334776    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:47:21.334788    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:47:21.349706    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:47:21.349720    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:47:21.361173    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:47:21.361188    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:47:21.377367    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:47:21.377383    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:47:21.382169    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:47:21.382178    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:47:21.420893    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:47:21.420907    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:47:21.435233    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:47:21.435247    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:47:21.446180    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:47:21.446189    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:47:21.457119    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:47:21.457133    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:47:21.471958    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:47:21.471971    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:47:23.984702    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:28.987656    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:28.987844    4599 kubeadm.go:597] duration metric: took 4m4.0017965s to restartPrimaryControlPlane
	W1025 18:47:28.988007    4599 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1025 18:47:28.988077    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1025 18:47:30.000305    4599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.012188459s)
	I1025 18:47:30.000538    4599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:47:30.005558    4599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:47:30.008528    4599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:47:30.011279    4599 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:47:30.011284    4599 kubeadm.go:157] found existing configuration files:
	
	I1025 18:47:30.011311    4599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/admin.conf
	I1025 18:47:30.013754    4599 kubeadm.go:163] "https://control-plane.minikube.internal:62322" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 18:47:30.013782    4599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 18:47:30.016896    4599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/kubelet.conf
	I1025 18:47:30.019822    4599 kubeadm.go:163] "https://control-plane.minikube.internal:62322" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 18:47:30.019857    4599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 18:47:30.022369    4599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/controller-manager.conf
	I1025 18:47:30.025357    4599 kubeadm.go:163] "https://control-plane.minikube.internal:62322" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 18:47:30.025384    4599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 18:47:30.028807    4599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/scheduler.conf
	I1025 18:47:30.031663    4599 kubeadm.go:163] "https://control-plane.minikube.internal:62322" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 18:47:30.031691    4599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
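The four grep/rm pairs above are a stale-kubeconfig sweep before reinitializing the control plane: any kubeconfig that cannot be confirmed to reference the expected endpoint (including files that are simply absent, as in this run) is removed. A sketch of that logic under those assumptions; this is not minikube's actual implementation:

    package main

    import (
    	"os"
    	"path/filepath"
    	"strings"
    )

    // cleanStaleConfigs deletes any kubeconfig that does not mention the
    // expected control-plane endpoint, with "sudo rm -f" semantics: a file
    // that fails to read (e.g. does not exist) is treated the same way.
    func cleanStaleConfigs(endpoint string) {
    	for _, name := range []string{
    		"admin.conf", "kubelet.conf",
    		"controller-manager.conf", "scheduler.conf",
    	} {
    		path := filepath.Join("/etc/kubernetes", name)
    		data, err := os.ReadFile(path)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(path) // errors ignored, like "rm -f"
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:62322")
    }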
	I1025 18:47:30.034251    4599 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 18:47:30.050272    4599 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1025 18:47:30.050299    4599 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 18:47:30.107050    4599 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:47:30.107113    4599 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:47:30.107179    4599 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 18:47:30.157221    4599 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:47:30.161448    4599 out.go:235]   - Generating certificates and keys ...
	I1025 18:47:30.161481    4599 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 18:47:30.161517    4599 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 18:47:30.161573    4599 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 18:47:30.161617    4599 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 18:47:30.161660    4599 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 18:47:30.161687    4599 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 18:47:30.161730    4599 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 18:47:30.161792    4599 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 18:47:30.161853    4599 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 18:47:30.161896    4599 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 18:47:30.161918    4599 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 18:47:30.161945    4599 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:47:30.201637    4599 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:47:30.424184    4599 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:47:30.473133    4599 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:47:30.552461    4599 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:47:30.582080    4599 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:47:30.582452    4599 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:47:30.582480    4599 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 18:47:30.670508    4599 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:47:30.674756    4599 out.go:235]   - Booting up control plane ...
	I1025 18:47:30.674801    4599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:47:30.674843    4599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:47:30.674882    4599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:47:30.674930    4599 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:47:30.675015    4599 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:47:35.173699    4599 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502879 seconds
	I1025 18:47:35.173764    4599 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 18:47:35.177594    4599 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 18:47:35.698691    4599 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 18:47:35.699137    4599 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-889000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 18:47:36.203285    4599 kubeadm.go:310] [bootstrap-token] Using token: a0knbh.qb4bjtcmvw8hg9x6
	I1025 18:47:36.209485    4599 out.go:235]   - Configuring RBAC rules ...
	I1025 18:47:36.209555    4599 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 18:47:36.209606    4599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 18:47:36.216292    4599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 18:47:36.217204    4599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 18:47:36.218159    4599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 18:47:36.218942    4599 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 18:47:36.222048    4599 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 18:47:36.401751    4599 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1025 18:47:36.607445    4599 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1025 18:47:36.607946    4599 kubeadm.go:310] 
	I1025 18:47:36.607975    4599 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1025 18:47:36.607983    4599 kubeadm.go:310] 
	I1025 18:47:36.608026    4599 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1025 18:47:36.608036    4599 kubeadm.go:310] 
	I1025 18:47:36.608052    4599 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1025 18:47:36.608088    4599 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 18:47:36.608117    4599 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 18:47:36.608121    4599 kubeadm.go:310] 
	I1025 18:47:36.608152    4599 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1025 18:47:36.608155    4599 kubeadm.go:310] 
	I1025 18:47:36.608182    4599 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 18:47:36.608185    4599 kubeadm.go:310] 
	I1025 18:47:36.608213    4599 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1025 18:47:36.608256    4599 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 18:47:36.608299    4599 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 18:47:36.608303    4599 kubeadm.go:310] 
	I1025 18:47:36.608354    4599 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 18:47:36.608388    4599 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1025 18:47:36.608393    4599 kubeadm.go:310] 
	I1025 18:47:36.608429    4599 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a0knbh.qb4bjtcmvw8hg9x6 \
	I1025 18:47:36.608488    4599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9d1b51b46aa29bee5add6dcd2f2839d068831832311340de43d2611a1555cef \
	I1025 18:47:36.608502    4599 kubeadm.go:310] 	--control-plane 
	I1025 18:47:36.608509    4599 kubeadm.go:310] 
	I1025 18:47:36.608574    4599 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1025 18:47:36.608579    4599 kubeadm.go:310] 
	I1025 18:47:36.608627    4599 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a0knbh.qb4bjtcmvw8hg9x6 \
	I1025 18:47:36.608694    4599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9d1b51b46aa29bee5add6dcd2f2839d068831832311340de43d2611a1555cef 
	I1025 18:47:36.608759    4599 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:47:36.608766    4599 cni.go:84] Creating CNI manager for ""
	I1025 18:47:36.608773    4599 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:47:36.611368    4599 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 18:47:36.614497    4599 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 18:47:36.617575    4599 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
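The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For illustration only, a generic bridge+portmap conflist of roughly that shape, written the same way; the pod subnet and plugin fields below are assumed defaults, not values taken from this run:

    package main

    import (
    	"log"
    	"os"
    )

    // conflist is a generic bridge CNI chain: a bridge plugin with host-local
    // IPAM, plus portmap for hostPort support. Values are illustrative.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
    	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }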
	I1025 18:47:36.623113    4599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 18:47:36.623177    4599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:36.623194    4599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-889000 minikube.k8s.io/updated_at=2024_10_25T18_47_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=running-upgrade-889000 minikube.k8s.io/primary=true
	I1025 18:47:36.655576    4599 kubeadm.go:1113] duration metric: took 32.4525ms to wait for elevateKubeSystemPrivileges
	I1025 18:47:36.655609    4599 ops.go:34] apiserver oom_adj: -16
	I1025 18:47:36.666367    4599 kubeadm.go:394] duration metric: took 4m11.693619375s to StartCluster
	I1025 18:47:36.666385    4599 settings.go:142] acquiring lock: {Name:mk3ff32802ddfc6c1e0425afbf853ac78c436759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:47:36.666510    4599 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:47:36.666982    4599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/kubeconfig: {Name:mk88d1ac601cc80b64027f8557b82969027e8e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:47:36.667188    4599 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:47:36.667221    4599 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 18:47:36.667257    4599 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-889000"
	I1025 18:47:36.667265    4599 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-889000"
	W1025 18:47:36.667271    4599 addons.go:243] addon storage-provisioner should already be in state true
	I1025 18:47:36.667284    4599 host.go:66] Checking if "running-upgrade-889000" exists ...
	I1025 18:47:36.667304    4599 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-889000"
	I1025 18:47:36.667314    4599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-889000"
	I1025 18:47:36.667389    4599 config.go:182] Loaded profile config "running-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:47:36.668251    4599 kapi.go:59] client config for running-upgrade-889000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/client.key", CAFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1065ee680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
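The kapi.go line above dumps the rest.Config minikube builds for this profile. A minimal client-go sketch of an equivalent client, reusing the certificate paths printed in that line; the discovery call and error handling are illustrative, not the harness's code:

    package main

    import (
    	"fmt"
    	"log"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	// cert/key/CA paths are the ones logged by kapi.go above
    	cfg := &rest.Config{
    		Host: "https://10.0.2.15:8443",
    		TLSClientConfig: rest.TLSClientConfig{
    			CertFile: "/Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/client.crt",
    			KeyFile:  "/Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/client.key",
    			CAFile:   "/Users/jenkins/minikube-integration/19868-1112/.minikube/ca.crt",
    		},
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	version, err := clientset.Discovery().ServerVersion()
    	if err != nil {
    		log.Fatal(err) // this run would fail here: the apiserver never answers
    	}
    	fmt.Println("apiserver version:", version.GitVersion)
    }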
	I1025 18:47:36.668630    4599 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-889000"
	W1025 18:47:36.668637    4599 addons.go:243] addon default-storageclass should already be in state true
	I1025 18:47:36.668644    4599 host.go:66] Checking if "running-upgrade-889000" exists ...
	I1025 18:47:36.671443    4599 out.go:177] * Verifying Kubernetes components...
	I1025 18:47:36.671782    4599 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 18:47:36.675607    4599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 18:47:36.675613    4599 sshutil.go:53] new ssh client: &{IP:localhost Port:62290 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/running-upgrade-889000/id_rsa Username:docker}
	I1025 18:47:36.681428    4599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:47:36.685475    4599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:47:36.689422    4599 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:47:36.689429    4599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 18:47:36.689436    4599 sshutil.go:53] new ssh client: &{IP:localhost Port:62290 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/running-upgrade-889000/id_rsa Username:docker}
	I1025 18:47:36.776192    4599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 18:47:36.781457    4599 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:47:36.781518    4599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:47:36.784056    4599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 18:47:36.787151    4599 api_server.go:72] duration metric: took 119.947375ms to wait for apiserver process to appear ...
	I1025 18:47:36.787159    4599 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:47:36.787166    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:36.810641    4599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:47:37.102085    4599 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 18:47:37.102096    4599 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 18:47:41.789358    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:41.789383    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:46.789721    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:46.789741    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:51.790183    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:51.790206    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:56.790684    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:56.790713    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:01.791320    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:01.791382    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:06.792132    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:06.792148    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1025 18:48:07.103181    4599 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1025 18:48:07.107651    4599 out.go:177] * Enabled addons: storage-provisioner
	I1025 18:48:07.115605    4599 addons.go:510] duration metric: took 30.4476695s for enable addons: enabled=[storage-provisioner]
	I1025 18:48:11.792987    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:11.793042    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:16.793963    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:16.793985    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:21.795292    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:21.795332    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:26.797017    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:26.797051    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:31.799152    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:31.799197    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:36.801610    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:36.801707    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:36.813665    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:48:36.813742    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:36.823991    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:48:36.824074    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:36.834239    4599 logs.go:282] 2 containers: [60d180d33f33 6b3f7166e29d]
	I1025 18:48:36.834320    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:36.844640    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:48:36.844716    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:36.855581    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:48:36.855669    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:36.866769    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:48:36.866839    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:36.877175    4599 logs.go:282] 0 containers: []
	W1025 18:48:36.877187    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:36.877261    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:36.887819    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:48:36.887834    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:48:36.887840    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:48:36.902623    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:48:36.902634    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:48:36.914622    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:36.914638    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:36.939623    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:48:36.939632    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:36.952153    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:36.952163    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:36.987113    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:48:36.987122    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:48:37.002288    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:48:37.002304    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:48:37.014051    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:48:37.014064    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:48:37.026050    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:48:37.026062    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:48:37.043350    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:48:37.043359    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:48:37.054975    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:37.054989    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:37.059433    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:37.059438    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:37.094344    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:48:37.094359    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:48:39.613229    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:44.615802    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:44.616083    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:44.643335    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:48:44.643474    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:44.659653    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:48:44.659740    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:44.672902    4599 logs.go:282] 2 containers: [60d180d33f33 6b3f7166e29d]
	I1025 18:48:44.672984    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:44.684032    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:48:44.684111    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:44.694889    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:48:44.694977    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:44.705362    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:48:44.705430    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:44.715330    4599 logs.go:282] 0 containers: []
	W1025 18:48:44.715344    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:44.715403    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:44.728351    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:48:44.728365    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:48:44.728371    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:48:44.740417    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:48:44.740428    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:48:44.755681    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:48:44.755693    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:48:44.773428    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:48:44.773439    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:44.784583    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:44.784594    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:44.789494    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:44.789504    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:44.825280    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:48:44.825292    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:48:44.839794    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:48:44.839805    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:48:44.851645    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:48:44.851657    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:48:44.863265    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:44.863275    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:44.888953    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:44.888967    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:44.925428    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:48:44.925442    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:48:44.939761    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:48:44.939771    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:48:47.453614    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:52.456605    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:52.457100    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:52.501427    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:48:52.501583    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:52.521087    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:48:52.521200    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:52.534458    4599 logs.go:282] 2 containers: [60d180d33f33 6b3f7166e29d]
	I1025 18:48:52.534536    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:52.546847    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:48:52.546925    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:52.557851    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:48:52.557926    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:52.568578    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:48:52.568663    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:52.579602    4599 logs.go:282] 0 containers: []
	W1025 18:48:52.579613    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:52.579688    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:52.590421    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:48:52.590436    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:52.590442    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:52.615554    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:52.615564    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:52.650955    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:48:52.650966    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:48:52.665518    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:48:52.665530    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:48:52.679675    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:48:52.679685    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:48:52.691418    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:48:52.691429    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:48:52.702585    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:48:52.702595    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:48:52.720298    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:48:52.720309    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:52.732689    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:52.732699    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:52.737303    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:52.737310    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:52.771892    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:48:52.771903    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:48:52.784229    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:48:52.784240    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:48:52.799570    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:48:52.799582    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
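The lines above are one complete iteration of minikube's diagnostic loop: api_server.go probes https://10.0.2.15:8443/healthz with a 5-second client timeout, and after each failure logs.go re-enumerates the control-plane containers and re-collects a 400-line tail from each before the next probe. A minimal sketch of reproducing the same probe and collection by hand is below; the address, container ID, and tail length are taken from the log, while using `minikube ssh` as the entry point (and curl being available in the guest) is an assumption, and only works while the profile's VM is still running:

	# hypothetical manual reproduction of the apiserver probe (5s timeout, self-signed cert),
	# run inside the guest since 10.0.2.15 is the QEMU user-mode guest address
	minikube ssh -- curl -sk --max-time 5 https://10.0.2.15:8443/healthz
	# the same collection commands the loop runs: find the container, then tail its logs
	minikube ssh -- docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'
	minikube ssh -- docker logs --tail 400 9f50947853ee

Every subsequent cycle in this log is the same sequence with fresh timestamps; the loop keeps repeating because the healthz GET never succeeds before the client timeout.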
	I1025 18:48:55.315158    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:00.316223    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:00.316409    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:00.335197    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:49:00.335306    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:00.349188    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:49:00.349268    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:00.360920    4599 logs.go:282] 2 containers: [60d180d33f33 6b3f7166e29d]
	I1025 18:49:00.360995    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:00.371855    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:49:00.371933    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:00.388033    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:49:00.388107    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:00.398733    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:49:00.398809    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:00.408944    4599 logs.go:282] 0 containers: []
	W1025 18:49:00.408961    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:00.409027    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:00.419362    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:49:00.419376    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:00.419380    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:00.454061    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:00.454072    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:00.458301    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:49:00.458310    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:49:00.470039    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:49:00.470051    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:49:00.483770    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:49:00.483781    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:00.495748    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:00.495758    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:00.520431    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:00.520441    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:00.556194    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:49:00.556205    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:49:00.570429    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:49:00.570440    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:49:00.584164    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:49:00.584175    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:49:00.595682    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:49:00.595693    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:49:00.611008    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:49:00.611019    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:49:00.623161    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:49:00.623172    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:49:03.142698    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:08.145121    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:08.145394    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:08.161956    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:49:08.162058    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:08.174926    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:49:08.175003    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:08.188442    4599 logs.go:282] 2 containers: [60d180d33f33 6b3f7166e29d]
	I1025 18:49:08.188511    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:08.198887    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:49:08.198967    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:08.214546    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:49:08.214632    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:08.225023    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:49:08.225090    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:08.234815    4599 logs.go:282] 0 containers: []
	W1025 18:49:08.234825    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:08.234895    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:08.248747    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:49:08.248762    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:49:08.248768    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:49:08.264061    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:49:08.264075    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:49:08.278964    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:08.278977    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:08.302441    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:08.302449    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:08.336248    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:08.336257    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:08.372344    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:49:08.372355    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:49:08.387099    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:49:08.387110    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:49:08.408976    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:49:08.408988    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:49:08.421222    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:49:08.421233    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:08.434314    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:08.434327    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:08.438722    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:49:08.438730    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:49:08.450670    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:49:08.450681    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:49:08.462451    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:49:08.462462    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:49:10.986278    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:15.988614    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:15.988834    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:16.013132    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:49:16.013258    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:16.027797    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:49:16.027884    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:16.039757    4599 logs.go:282] 2 containers: [60d180d33f33 6b3f7166e29d]
	I1025 18:49:16.039837    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:16.050380    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:49:16.050458    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:16.060589    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:49:16.060664    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:16.071084    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:49:16.071165    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:16.081258    4599 logs.go:282] 0 containers: []
	W1025 18:49:16.081271    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:16.081338    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:16.091739    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:49:16.091752    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:49:16.091757    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:49:16.103009    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:16.103022    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:16.128527    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:49:16.128538    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:49:16.145503    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:49:16.145514    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:49:16.160522    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:49:16.160536    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:49:16.173041    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:49:16.173052    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:49:16.198761    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:49:16.198772    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:49:16.212698    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:49:16.212710    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:49:16.224343    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:49:16.224354    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:16.236108    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:16.236121    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:16.271427    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:16.271438    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:16.275854    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:16.275863    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:16.309985    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:49:16.309995    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:49:18.826995    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:23.827708    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:23.827894    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:23.840811    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:49:23.840904    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:23.855761    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:49:23.855840    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:23.866650    4599 logs.go:282] 2 containers: [60d180d33f33 6b3f7166e29d]
	I1025 18:49:23.866736    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:23.878383    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:49:23.878454    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:23.888828    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:49:23.888911    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:23.899305    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:49:23.899375    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:23.912522    4599 logs.go:282] 0 containers: []
	W1025 18:49:23.912536    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:23.912596    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:23.926884    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:49:23.926901    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:23.926907    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:23.963460    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:49:23.963470    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:49:23.979027    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:49:23.979038    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:49:23.991732    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:49:23.991745    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:49:24.009002    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:49:24.009013    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:24.021687    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:24.021699    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:24.055584    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:24.055594    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:24.059793    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:49:24.059801    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:49:24.078097    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:49:24.078107    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:49:24.092023    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:49:24.092033    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:49:24.103338    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:49:24.103347    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:49:24.116353    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:49:24.116363    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:49:24.131686    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:24.131698    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:26.656795    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:31.659310    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:31.659484    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:31.670348    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:49:31.670431    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:31.681126    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:49:31.681203    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:31.692826    4599 logs.go:282] 2 containers: [60d180d33f33 6b3f7166e29d]
	I1025 18:49:31.692917    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:31.703357    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:49:31.703435    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:31.713724    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:49:31.713809    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:31.725031    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:49:31.725108    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:31.738431    4599 logs.go:282] 0 containers: []
	W1025 18:49:31.738442    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:31.738509    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:31.755380    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:49:31.755399    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:49:31.755407    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:49:31.770095    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:49:31.770108    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:49:31.781329    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:49:31.781340    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:49:31.793449    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:49:31.793458    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:49:31.808665    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:49:31.808673    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:49:31.826710    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:31.826719    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:31.851510    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:49:31.851517    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:31.863133    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:31.863145    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:31.896237    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:31.896243    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:31.900640    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:31.900646    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:31.935385    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:49:31.935402    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:49:31.949415    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:49:31.949425    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:49:31.961194    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:49:31.961210    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:49:34.475039    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:39.477819    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:39.478045    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:39.500874    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:49:39.501004    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:39.517175    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:49:39.517267    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:39.530114    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:49:39.530192    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:39.541558    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:49:39.541638    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:39.552014    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:49:39.552097    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:39.562740    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:49:39.562822    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:39.573120    4599 logs.go:282] 0 containers: []
	W1025 18:49:39.573138    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:39.573200    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:39.584018    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:49:39.584038    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:49:39.584044    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:49:39.598354    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:49:39.598364    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:49:39.620253    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:39.620266    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:39.646308    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:39.646315    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:39.650893    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:39.650903    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:39.686056    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:49:39.686067    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:49:39.700122    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:49:39.700135    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:49:39.711346    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:49:39.711358    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:49:39.728875    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:49:39.728886    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:49:39.745740    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:39.745750    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:39.779508    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:49:39.779516    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:49:39.795218    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:49:39.795229    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:49:39.806767    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:49:39.806779    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:49:39.818148    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:49:39.818161    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:49:39.833643    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:49:39.833657    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:42.348579    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:47.351307    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:47.351487    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:47.367992    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:49:47.368079    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:47.380601    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:49:47.380680    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:47.398193    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:49:47.398271    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:47.408913    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:49:47.408983    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:47.419165    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:49:47.419244    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:47.434009    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:49:47.434082    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:47.444477    4599 logs.go:282] 0 containers: []
	W1025 18:49:47.444488    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:47.444550    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:47.455152    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:49:47.455172    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:49:47.455178    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:49:47.470051    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:49:47.470064    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:47.482205    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:47.482221    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:47.516451    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:49:47.516465    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:49:47.529239    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:49:47.529250    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:49:47.540120    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:47.540134    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:47.564055    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:49:47.564064    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:49:47.578565    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:49:47.578575    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:49:47.596447    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:49:47.596456    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:49:47.610774    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:49:47.610783    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:49:47.624897    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:49:47.624908    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:49:47.636403    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:49:47.636414    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:49:47.647787    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:49:47.647798    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:49:47.659170    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:47.659181    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:47.663767    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:47.663774    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:50.201479    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:55.203966    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:55.204084    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:55.215836    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:49:55.215911    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:55.228098    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:49:55.228179    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:55.238682    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:49:55.238762    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:55.249199    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:49:55.249270    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:55.259550    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:49:55.259622    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:55.269755    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:49:55.269820    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:55.280161    4599 logs.go:282] 0 containers: []
	W1025 18:49:55.280175    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:55.280245    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:55.293174    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:49:55.293191    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:49:55.293197    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:49:55.307107    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:49:55.307117    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:49:55.324110    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:49:55.324122    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:55.336752    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:55.336766    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:55.341527    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:55.341534    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:55.377469    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:49:55.377483    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:49:55.391502    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:49:55.391512    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:49:55.402921    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:49:55.402933    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:49:55.418988    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:49:55.419000    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:49:55.436822    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:55.436832    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:55.470897    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:49:55.470910    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:49:55.482954    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:49:55.482966    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:49:55.494865    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:49:55.494876    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:49:55.506391    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:49:55.506403    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:49:55.518364    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:55.518374    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:58.045791    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:03.048208    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:03.048346    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:03.064329    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:50:03.064423    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:03.074922    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:50:03.075008    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:03.089399    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:50:03.089470    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:03.099811    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:50:03.099877    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:03.113512    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:50:03.113579    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:03.124136    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:50:03.124213    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:03.134376    4599 logs.go:282] 0 containers: []
	W1025 18:50:03.134389    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:03.134452    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:03.149884    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:50:03.149902    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:03.149908    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:03.155091    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:03.155097    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:03.179984    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:50:03.179993    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:50:03.191991    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:50:03.192001    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:50:03.203888    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:03.203898    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:03.237934    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:50:03.237943    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:50:03.254095    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:50:03.254112    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:50:03.268439    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:50:03.268449    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:50:03.283455    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:50:03.283468    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:50:03.299346    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:50:03.299356    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:50:03.325667    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:50:03.325682    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:50:03.345014    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:50:03.345025    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:03.357754    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:03.357766    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:03.393636    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:50:03.393649    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:50:03.405365    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:50:03.405377    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:50:05.919588    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:10.922069    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:10.922212    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:10.936167    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:50:10.936253    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:10.947321    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:50:10.947392    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:10.958278    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:50:10.958368    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:10.969557    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:50:10.969629    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:10.979778    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:50:10.979859    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:10.990697    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:50:10.990777    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:11.000686    4599 logs.go:282] 0 containers: []
	W1025 18:50:11.000700    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:11.000765    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:11.019264    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:50:11.019282    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:50:11.019287    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:50:11.031406    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:50:11.031417    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:50:11.050226    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:50:11.050237    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:50:11.061648    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:11.061659    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:11.088018    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:50:11.088027    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:50:11.102009    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:50:11.102019    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:50:11.113932    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:11.113942    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:11.153842    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:50:11.153853    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:50:11.166917    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:50:11.166928    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:50:11.178884    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:11.178895    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:11.212811    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:11.212817    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:11.216982    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:50:11.216991    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:11.228140    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:50:11.228152    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:50:11.242465    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:50:11.242481    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:50:11.265092    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:50:11.265102    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:50:13.786663    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:18.787896    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:18.788064    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:18.799198    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:50:18.799274    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:18.809321    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:50:18.809400    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:18.824407    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:50:18.824491    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:18.834685    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:50:18.834764    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:18.845733    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:50:18.845811    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:18.856584    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:50:18.856653    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:18.866958    4599 logs.go:282] 0 containers: []
	W1025 18:50:18.866969    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:18.867030    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:18.877301    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:50:18.877324    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:50:18.877329    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:50:18.892177    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:50:18.892191    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:18.904185    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:18.904195    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:18.939648    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:50:18.939658    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:50:18.955557    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:50:18.955567    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:50:18.967547    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:50:18.967561    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:50:18.979077    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:50:18.979088    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:50:18.994311    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:18.994325    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:19.030160    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:19.030173    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:19.035192    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:50:19.035201    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:50:19.046740    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:19.046751    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:19.072209    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:50:19.072217    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:50:19.085996    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:50:19.086005    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:50:19.097671    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:50:19.097683    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:50:19.109827    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:50:19.109840    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:50:21.629944    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:26.632420    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:26.632503    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:26.643029    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:50:26.643120    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:26.653767    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:50:26.653845    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:26.664083    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:50:26.664155    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:26.674811    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:50:26.674894    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:26.684951    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:50:26.685027    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:26.695017    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:50:26.695090    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:26.712156    4599 logs.go:282] 0 containers: []
	W1025 18:50:26.712170    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:26.712242    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:26.724765    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:50:26.724786    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:50:26.724791    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:50:26.737795    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:50:26.737809    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:50:26.749637    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:50:26.749647    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:50:26.767547    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:50:26.767560    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:26.779783    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:26.779793    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:26.784716    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:50:26.784723    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:50:26.802264    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:50:26.802275    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:50:26.814265    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:50:26.814277    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:50:26.826862    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:50:26.826873    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:50:26.844193    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:26.844203    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:26.879645    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:26.879655    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:26.904109    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:26.904119    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:26.940518    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:50:26.940529    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:50:26.954216    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:50:26.954227    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:50:26.965911    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:50:26.965922    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:50:29.479459    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:34.481184    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:34.481282    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:34.492395    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:50:34.492478    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:34.502685    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:50:34.502765    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:34.513763    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:50:34.513859    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:34.525530    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:50:34.525608    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:34.536440    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:50:34.536509    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:34.547012    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:50:34.547082    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:34.557398    4599 logs.go:282] 0 containers: []
	W1025 18:50:34.557410    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:34.557468    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:34.568118    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:50:34.568135    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:34.568141    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:34.604178    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:50:34.604188    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:50:34.619080    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:50:34.619092    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:50:34.634669    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:50:34.634679    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:50:34.652596    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:50:34.652606    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:50:34.666554    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:50:34.666568    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:50:34.678346    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:34.678357    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:34.715279    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:50:34.715293    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:50:34.729121    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:50:34.729132    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:50:34.742035    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:50:34.742049    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:50:34.757999    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:50:34.758011    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:34.770645    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:34.770657    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:34.775609    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:50:34.775620    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:50:34.791279    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:50:34.791292    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:50:34.804619    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:34.804630    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:37.330380    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:42.332984    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:42.333086    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:42.348280    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:50:42.348359    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:42.360176    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:50:42.360252    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:42.371794    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:50:42.371900    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:42.382871    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:50:42.382948    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:42.394958    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:50:42.395037    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:42.406286    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:50:42.406362    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:42.421032    4599 logs.go:282] 0 containers: []
	W1025 18:50:42.421044    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:42.421112    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:42.431939    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:50:42.431959    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:50:42.431965    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:50:42.443613    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:50:42.443622    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:50:42.455305    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:50:42.455317    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:42.467078    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:50:42.467091    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:50:42.484686    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:50:42.484697    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:50:42.496224    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:50:42.496235    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:50:42.511204    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:50:42.511219    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:50:42.529509    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:42.529522    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:42.554133    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:42.554148    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:42.589939    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:42.589948    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:42.594852    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:50:42.594861    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:50:42.606312    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:50:42.606327    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:50:42.618663    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:50:42.618674    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:50:42.630518    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:42.630529    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:42.666500    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:50:42.666510    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:50:45.182656    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:50.185022    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:50.185119    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:50.196630    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:50:50.196715    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:50.207912    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:50:50.207986    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:50.219199    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:50:50.219285    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:50.241717    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:50:50.241801    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:50.252922    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:50:50.253000    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:50.264661    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:50:50.264748    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:50.281283    4599 logs.go:282] 0 containers: []
	W1025 18:50:50.281296    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:50.281375    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:50.293397    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:50:50.293415    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:50.293421    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:50.298823    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:50:50.298913    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:50:50.315092    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:50:50.315105    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:50:50.327624    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:50:50.327635    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:50:50.339090    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:50:50.339106    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:50:50.351591    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:50:50.351607    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:50:50.370320    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:50:50.370334    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:50:50.391427    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:50.391437    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:50.426946    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:50.426954    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:50.462036    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:50:50.462049    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:50:50.480592    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:50.480605    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:50.504793    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:50:50.504803    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:50.516233    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:50:50.516248    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:50:50.531159    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:50:50.531169    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:50:50.543066    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:50:50.543081    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:50:53.057567    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:58.059608    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:58.059726    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:58.071319    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:50:58.071409    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:58.082712    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:50:58.082828    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:58.094647    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:50:58.094735    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:58.107186    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:50:58.107266    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:58.124101    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:50:58.124185    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:58.139478    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:50:58.139563    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:58.153563    4599 logs.go:282] 0 containers: []
	W1025 18:50:58.153573    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:58.153647    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:58.165007    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:50:58.165024    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:50:58.165029    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:50:58.183221    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:50:58.183236    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:50:58.195446    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:58.195458    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:58.222291    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:50:58.222306    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:50:58.235595    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:50:58.235611    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:50:58.251967    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:50:58.251980    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:50:58.272599    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:58.272616    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:58.277642    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:50:58.277656    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:50:58.293814    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:50:58.293831    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:50:58.311930    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:50:58.311945    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:50:58.324780    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:50:58.324797    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:50:58.338259    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:50:58.338271    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:58.350640    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:58.350652    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:58.388535    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:58.388560    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:58.426631    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:50:58.426648    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:51:00.941757    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:05.942484    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:05.942608    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:51:05.955286    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:51:05.955372    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:51:05.967151    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:51:05.967241    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:51:05.978770    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:51:05.978847    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:51:05.990081    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:51:05.990160    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:51:06.000726    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:51:06.000802    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:51:06.013332    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:51:06.013410    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:51:06.023247    4599 logs.go:282] 0 containers: []
	W1025 18:51:06.023259    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:51:06.023322    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:51:06.033993    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:51:06.034012    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:51:06.034018    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:51:06.045834    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:51:06.045845    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:51:06.058200    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:51:06.058213    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:51:06.072198    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:51:06.072211    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:51:06.090248    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:51:06.090261    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:51:06.102063    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:51:06.102076    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:51:06.137531    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:51:06.137547    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:51:06.151951    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:51:06.151965    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:51:06.163750    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:51:06.163766    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:51:06.176132    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:51:06.176144    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:51:06.188132    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:51:06.188145    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:51:06.212535    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:51:06.212545    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:51:06.217040    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:51:06.217047    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:51:06.234130    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:51:06.234142    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:51:06.249120    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:51:06.249132    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:51:08.786338    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:13.788735    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:13.788884    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:51:13.799520    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:51:13.799600    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:51:13.810352    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:51:13.810429    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:51:13.821403    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:51:13.821475    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:51:13.831821    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:51:13.831905    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:51:13.843166    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:51:13.843229    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:51:13.853933    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:51:13.854011    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:51:13.863767    4599 logs.go:282] 0 containers: []
	W1025 18:51:13.863779    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:51:13.863846    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:51:13.874435    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:51:13.874453    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:51:13.874460    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:51:13.886559    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:51:13.886570    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:51:13.898771    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:51:13.898783    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:51:13.934628    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:51:13.934640    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:51:13.946661    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:51:13.946672    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:51:13.967239    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:51:13.967250    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:51:13.980543    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:51:13.980556    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:51:13.985065    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:51:13.985072    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:51:13.997027    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:51:13.997038    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:51:14.009645    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:51:14.009656    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:51:14.032708    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:51:14.032718    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:51:14.066356    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:51:14.066366    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:51:14.080654    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:51:14.080665    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:51:14.093311    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:51:14.093322    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:51:14.110623    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:51:14.110633    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:51:16.627492    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:21.629901    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:21.630135    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:51:21.653572    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:51:21.653691    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:51:21.669979    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:51:21.670079    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:51:21.683104    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:51:21.683191    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:51:21.694736    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:51:21.694806    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:51:21.705420    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:51:21.705498    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:51:21.715483    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:51:21.715558    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:51:21.726482    4599 logs.go:282] 0 containers: []
	W1025 18:51:21.726494    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:51:21.726561    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:51:21.736601    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:51:21.736623    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:51:21.736628    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:51:21.750876    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:51:21.750887    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:51:21.763154    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:51:21.763167    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:51:21.775070    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:51:21.775081    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:51:21.787648    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:51:21.787659    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:51:21.812534    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:51:21.812543    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:51:21.816932    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:51:21.816938    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:51:21.828646    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:51:21.828657    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:51:21.843630    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:51:21.843644    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:51:21.856153    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:51:21.856166    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:51:21.892345    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:51:21.892357    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:51:21.910327    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:51:21.910342    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:51:21.922323    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:51:21.922334    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:51:21.958319    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:51:21.958328    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:51:21.972664    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:51:21.972676    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:51:24.490255    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:29.492700    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:29.492851    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:51:29.505171    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:51:29.505251    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:51:29.515619    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:51:29.515697    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:51:29.526320    4599 logs.go:282] 4 containers: [fece211667ee d3bfc54bf916 dbf479c07baa 73883d1045df]
	I1025 18:51:29.526403    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:51:29.536574    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:51:29.536653    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:51:29.546782    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:51:29.546849    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:51:29.557391    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:51:29.557468    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:51:29.567614    4599 logs.go:282] 0 containers: []
	W1025 18:51:29.567624    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:51:29.567684    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:51:29.578660    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:51:29.578677    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:51:29.578683    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:51:29.583262    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:51:29.583272    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:51:29.596421    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:51:29.596434    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:51:29.612606    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:51:29.612619    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:51:29.624478    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:51:29.624489    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:51:29.638796    4599 logs.go:123] Gathering logs for coredns [fece211667ee] ...
	I1025 18:51:29.638806    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fece211667ee"
	I1025 18:51:29.649549    4599 logs.go:123] Gathering logs for coredns [d3bfc54bf916] ...
	I1025 18:51:29.649558    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3bfc54bf916"
	I1025 18:51:29.664694    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:51:29.664708    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:51:29.689178    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:51:29.689185    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:51:29.724260    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:51:29.724269    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:51:29.761173    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:51:29.761185    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:51:29.773338    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:51:29.773349    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:51:29.790749    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:51:29.790760    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:51:29.802836    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:51:29.802848    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:51:29.814516    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:51:29.814528    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:51:32.328827    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:37.331255    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:37.335956    4599 out.go:201] 
	W1025 18:51:37.339777    4599 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1025 18:51:37.339788    4599 out.go:270] * 
	W1025 18:51:37.340562    4599 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:51:37.350775    4599 out.go:201] 

** /stderr **
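For context, the repeated "Checking apiserver healthz at https://10.0.2.15:8443/healthz" / "context deadline exceeded" pairs in the stderr above are a probe loop: minikube polls the apiserver health endpoint until it answers healthy or the 6m0s node-wait budget is exhausted. A minimal shell sketch of that probe, where only the URL and the roughly 5-second per-request timeout are taken from the log (the retry loop itself is illustrative, not minikube's actual code):

    # Poll the apiserver health endpoint until it answers "ok".
    # URL and ~5s client timeout are taken from the log lines above;
    # the loop structure and 2s retry interval are assumptions.
    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -qx ok; do
        sleep 2    # in the log, minikube retries every few seconds
    done

In this run the loop never succeeds, which is why every healthz check is followed by another round of log gathering.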
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-889000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-10-25 18:51:37.453693 -0700 PDT m=+4137.954072251
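The per-component log gathering recorded in the stderr above follows one fixed pattern: resolve a container ID by its kubelet-style `k8s_` name prefix, then tail that container's logs. Replayed by hand inside the guest it reduces to something like the following sketch (the filter, format string, and tail length are copied verbatim from the logged commands; the two-step shell pipeline is the only assumption):

    # Resolve the kube-apiserver container by its k8s_ name prefix,
    # then dump its last 400 log lines, mirroring the ssh_runner calls above.
    id=$(docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}')
    docker logs --tail 400 "$id"
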
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-889000 -n running-upgrade-889000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-889000 -n running-upgrade-889000: exit status 2 (15.601949125s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-889000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-194000          | force-systemd-flag-194000 | jenkins | v1.34.0 | 25 Oct 24 18:41 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-390000              | force-systemd-env-390000  | jenkins | v1.34.0 | 25 Oct 24 18:41 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-390000           | force-systemd-env-390000  | jenkins | v1.34.0 | 25 Oct 24 18:41 PDT | 25 Oct 24 18:41 PDT |
	| start   | -p docker-flags-922000                | docker-flags-922000       | jenkins | v1.34.0 | 25 Oct 24 18:41 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-194000             | force-systemd-flag-194000 | jenkins | v1.34.0 | 25 Oct 24 18:41 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-194000          | force-systemd-flag-194000 | jenkins | v1.34.0 | 25 Oct 24 18:41 PDT | 25 Oct 24 18:41 PDT |
	| start   | -p cert-expiration-614000             | cert-expiration-614000    | jenkins | v1.34.0 | 25 Oct 24 18:41 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-922000 ssh               | docker-flags-922000       | jenkins | v1.34.0 | 25 Oct 24 18:42 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-922000 ssh               | docker-flags-922000       | jenkins | v1.34.0 | 25 Oct 24 18:42 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-922000                | docker-flags-922000       | jenkins | v1.34.0 | 25 Oct 24 18:42 PDT | 25 Oct 24 18:42 PDT |
	| start   | -p cert-options-814000                | cert-options-814000       | jenkins | v1.34.0 | 25 Oct 24 18:42 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-814000 ssh               | cert-options-814000       | jenkins | v1.34.0 | 25 Oct 24 18:42 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-814000 -- sudo        | cert-options-814000       | jenkins | v1.34.0 | 25 Oct 24 18:42 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-814000                | cert-options-814000       | jenkins | v1.34.0 | 25 Oct 24 18:42 PDT | 25 Oct 24 18:42 PDT |
	| start   | -p running-upgrade-889000             | minikube                  | jenkins | v1.26.0 | 25 Oct 24 18:42 PDT | 25 Oct 24 18:43 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-889000             | running-upgrade-889000    | jenkins | v1.34.0 | 25 Oct 24 18:43 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-614000             | cert-expiration-614000    | jenkins | v1.34.0 | 25 Oct 24 18:45 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-614000             | cert-expiration-614000    | jenkins | v1.34.0 | 25 Oct 24 18:45 PDT | 25 Oct 24 18:45 PDT |
	| start   | -p kubernetes-upgrade-507000          | kubernetes-upgrade-507000 | jenkins | v1.34.0 | 25 Oct 24 18:45 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-507000          | kubernetes-upgrade-507000 | jenkins | v1.34.0 | 25 Oct 24 18:45 PDT | 25 Oct 24 18:45 PDT |
	| start   | -p kubernetes-upgrade-507000          | kubernetes-upgrade-507000 | jenkins | v1.34.0 | 25 Oct 24 18:45 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-507000          | kubernetes-upgrade-507000 | jenkins | v1.34.0 | 25 Oct 24 18:45 PDT | 25 Oct 24 18:45 PDT |
	| start   | -p stopped-upgrade-473000             | minikube                  | jenkins | v1.26.0 | 25 Oct 24 18:45 PDT | 25 Oct 24 18:46 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-473000 stop           | minikube                  | jenkins | v1.26.0 | 25 Oct 24 18:46 PDT | 25 Oct 24 18:46 PDT |
	| start   | -p stopped-upgrade-473000             | stopped-upgrade-473000    | jenkins | v1.34.0 | 25 Oct 24 18:46 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 18:46:25
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
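
	The prefix documented above is the standard klog/glog header, and every entry below follows it. As a minimal sketch of splitting such a line into its fields (the regexp and the LogLine type are illustrative, not minikube code):

```go
package main

import (
	"fmt"
	"regexp"
)

// klogRe matches the documented prefix "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
var klogRe = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

// LogLine is a hypothetical container for the parsed fields.
type LogLine struct {
	Severity, Date, Time, ThreadID, Source, Msg string
}

func parse(line string) (LogLine, bool) {
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		return LogLine{}, false
	}
	return LogLine{m[1], m[2], m[3], m[4], m[5], m[6]}, true
}

func main() {
	l, ok := parse("I1025 18:46:25.169326    4810 out.go:345] Setting OutFile to fd 1 ...")
	fmt.Println(ok, l.Severity, l.Source, l.Msg) // true I out.go:345 Setting OutFile to fd 1 ...
}
```
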
	I1025 18:46:20.657709    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:46:25.169326    4810 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:46:25.169505    4810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:46:25.169509    4810 out.go:358] Setting ErrFile to fd 2...
	I1025 18:46:25.169512    4810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:46:25.169675    4810 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:46:25.170899    4810 out.go:352] Setting JSON to false
	I1025 18:46:25.191438    4810 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4556,"bootTime":1729902629,"procs":559,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:46:25.191516    4810 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:46:25.199246    4810 out.go:177] * [stopped-upgrade-473000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:46:25.208244    4810 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:46:25.208298    4810 notify.go:220] Checking for updates...
	I1025 18:46:25.215200    4810 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:46:25.219182    4810 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:46:25.222219    4810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:46:25.225227    4810 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:46:25.228288    4810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:46:25.231618    4810 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:46:25.233143    4810 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1025 18:46:25.236274    4810 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:46:25.240254    4810 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 18:46:25.245225    4810 start.go:297] selected driver: qemu2
	I1025 18:46:25.245231    4810 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-473000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62543 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 18:46:25.245302    4810 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:46:25.248036    4810 cni.go:84] Creating CNI manager for ""
	I1025 18:46:25.248070    4810 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:46:25.248097    4810 start.go:340] cluster config:
	{Name:stopped-upgrade-473000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62543 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 18:46:25.248154    4810 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:46:25.255223    4810 out.go:177] * Starting "stopped-upgrade-473000" primary control-plane node in "stopped-upgrade-473000" cluster
	I1025 18:46:25.259226    4810 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1025 18:46:25.259243    4810 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1025 18:46:25.259250    4810 cache.go:56] Caching tarball of preloaded images
	I1025 18:46:25.259326    4810 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:46:25.259333    4810 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1025 18:46:25.259384    4810 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/config.json ...
	I1025 18:46:25.259770    4810 start.go:360] acquireMachinesLock for stopped-upgrade-473000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:46:25.259817    4810 start.go:364] duration metric: took 40.417µs to acquireMachinesLock for "stopped-upgrade-473000"
	I1025 18:46:25.259825    4810 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:46:25.259830    4810 fix.go:54] fixHost starting: 
	I1025 18:46:25.259942    4810 fix.go:112] recreateIfNeeded on stopped-upgrade-473000: state=Stopped err=<nil>
	W1025 18:46:25.259951    4810 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:46:25.264198    4810 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-473000" ...
	I1025 18:46:25.272037    4810 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:46:25.272115    4810 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/qemu.pid -nic user,model=virtio,hostfwd=tcp::62508-:22,hostfwd=tcp::62509-:2376,hostname=stopped-upgrade-473000 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/disk.qcow2
	I1025 18:46:25.319244    4810 main.go:141] libmachine: STDOUT: 
	I1025 18:46:25.319284    4810 main.go:141] libmachine: STDERR: 
	I1025 18:46:25.319292    4810 main.go:141] libmachine: Waiting for VM to start (ssh -p 62508 docker@127.0.0.1)...
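
	The "executing:" line above is the complete qemu-system-aarch64 invocation used to restart the VM. A sketch of driving a daemonized guest the same way from Go follows; the paths, memory size, and forwarded port are placeholders, not the values minikube computes:

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// startVM launches a daemonized qemu guest and reports its output,
// mirroring the STDOUT/STDERR lines logged by libmachine above.
func startVM() error {
	cmd := exec.Command("qemu-system-aarch64",
		"-M", "virt,highmem=off",
		"-cpu", "host",
		"-accel", "hvf", // hardware acceleration, as in "Using hvf" above
		"-m", "2200", "-smp", "2",
		"-nic", "user,model=virtio,hostfwd=tcp::62508-:22", // guest SSH reachable on host port 62508
		"-daemonize", "/path/to/disk.qcow2", // placeholder disk image
	)
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr
	err := cmd.Run() // with -daemonize, Run returns once the guest is backgrounded
	fmt.Printf("STDOUT: %s\nSTDERR: %s\n", stdout.String(), stderr.String())
	return err
}

func main() {
	if err := startVM(); err != nil {
		fmt.Println("qemu failed:", err)
	}
}
```
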
	I1025 18:46:25.660068    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:46:25.660294    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:46:25.677700    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:46:25.677778    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:46:25.689714    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:46:25.689795    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:46:25.700274    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:46:25.700348    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:46:25.710814    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:46:25.710899    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:46:25.728868    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:46:25.728946    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:46:25.739116    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:46:25.739189    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:46:25.748914    4599 logs.go:282] 0 containers: []
	W1025 18:46:25.748927    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:46:25.749000    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:46:25.763387    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:46:25.763407    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:46:25.763412    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:46:25.775122    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:46:25.775132    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:46:25.799943    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:46:25.799959    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:46:25.822813    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:46:25.822828    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:46:25.835191    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:46:25.835204    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:46:25.873403    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:46:25.873417    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:46:25.916173    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:46:25.916189    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:46:25.929853    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:46:25.929867    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:46:25.947613    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:46:25.947623    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:46:25.965579    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:46:25.965594    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:46:25.976933    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:46:25.976946    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:46:25.988145    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:46:25.988155    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:46:25.999439    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:46:25.999450    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:46:26.022071    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:46:26.022082    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:46:26.033486    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:46:26.033499    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:46:26.038424    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:46:26.038430    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:46:26.051012    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:46:26.051022    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:46:28.568700    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:46:33.571422    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:46:33.571667    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:46:33.583879    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:46:33.583968    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:46:33.595074    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:46:33.595155    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:46:33.605778    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:46:33.605842    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:46:33.618767    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:46:33.618854    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:46:33.630131    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:46:33.630216    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:46:33.641466    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:46:33.641536    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:46:33.651731    4599 logs.go:282] 0 containers: []
	W1025 18:46:33.651741    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:46:33.651799    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:46:33.662030    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:46:33.662047    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:46:33.662051    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:46:33.701571    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:46:33.701581    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:46:33.706343    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:46:33.706351    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:46:33.742119    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:46:33.742130    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:46:33.756423    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:46:33.756433    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:46:33.771880    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:46:33.771889    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:46:33.790297    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:46:33.790308    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:46:33.802169    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:46:33.802182    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:46:33.813471    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:46:33.813483    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:46:33.825114    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:46:33.825126    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:46:33.837168    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:46:33.837180    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:46:33.854736    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:46:33.854746    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:46:33.867567    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:46:33.867578    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:46:33.879154    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:46:33.879166    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:46:33.904288    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:46:33.904300    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:46:33.919143    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:46:33.919152    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:46:33.935295    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:46:33.935307    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
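
	Each cycle above probes the apiserver's /healthz endpoint, waits out a roughly 5s client timeout, logs the "stopped" error, and gathers container logs before trying again. A minimal sketch of that poll loop; the function name, timeout, and retry interval are assumptions for illustration:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz probes the apiserver once, treating a client timeout
// as "stopped", exactly the error shape seen in the log above.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped"
		Transport: &http.Transport{
			// the apiserver cert is not trusted by the host, so skip verification
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	for {
		if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
			fmt.Println(err) // the real flow gathers container logs here before retrying
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		fmt.Println("apiserver healthy")
		return
	}
}
```
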
	I1025 18:46:36.450249    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:46:41.451280    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:46:41.451810    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:46:41.491205    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:46:41.491372    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:46:41.508720    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:46:41.508835    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:46:41.525317    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:46:41.525415    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:46:41.537427    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:46:41.537521    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:46:41.550472    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:46:41.550543    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:46:41.561161    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:46:41.561255    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:46:41.571135    4599 logs.go:282] 0 containers: []
	W1025 18:46:41.571149    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:46:41.571216    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:46:41.583696    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:46:41.583715    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:46:41.583721    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:46:41.624426    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:46:41.624437    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:46:41.636048    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:46:41.636059    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:46:41.647169    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:46:41.647179    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:46:41.658867    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:46:41.658879    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:46:41.663087    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:46:41.663093    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:46:41.675741    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:46:41.675751    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:46:41.689378    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:46:41.689390    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:46:41.700808    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:46:41.700818    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:46:41.720039    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:46:41.720049    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:46:41.743724    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:46:41.743736    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:46:41.781701    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:46:41.781708    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:46:41.796617    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:46:41.796627    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:46:41.807598    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:46:41.807608    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:46:41.820409    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:46:41.820422    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:46:41.834014    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:46:41.834027    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:46:41.848459    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:46:41.848470    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:46:44.362038    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:46:45.217687    4810 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/config.json ...
	I1025 18:46:45.218502    4810 machine.go:93] provisionDockerMachine start ...
	I1025 18:46:45.218759    4810 main.go:141] libmachine: Using SSH client type: native
	I1025 18:46:45.219348    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033f65f0] 0x1033f8e30 <nil>  [] 0s} localhost 62508 <nil> <nil>}
	I1025 18:46:45.219362    4810 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 18:46:45.314922    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1025 18:46:45.314958    4810 buildroot.go:166] provisioning hostname "stopped-upgrade-473000"
	I1025 18:46:45.315111    4810 main.go:141] libmachine: Using SSH client type: native
	I1025 18:46:45.315341    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033f65f0] 0x1033f8e30 <nil>  [] 0s} localhost 62508 <nil> <nil>}
	I1025 18:46:45.315353    4810 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-473000 && echo "stopped-upgrade-473000" | sudo tee /etc/hostname
	I1025 18:46:45.403086    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-473000
	
	I1025 18:46:45.403172    4810 main.go:141] libmachine: Using SSH client type: native
	I1025 18:46:45.403313    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033f65f0] 0x1033f8e30 <nil>  [] 0s} localhost 62508 <nil> <nil>}
	I1025 18:46:45.403341    4810 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-473000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-473000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-473000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 18:46:45.482748    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: 
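
	The SSH command above is an idempotent /etc/hosts update: leave the file alone if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, and append otherwise. A sketch of the same logic in Go, operating on a local path for illustration (minikube runs the equivalent shell over SSH):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostname mirrors the shell above: no-op if name already appears
// at the end of a hosts line, otherwise replace the 127.0.1.1 entry or append one.
func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).Match(data) {
		return nil // hostname already mapped, like grep -xq '.*\s<name>'
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.Match(data) {
		data = loop.ReplaceAll(data, []byte("127.0.1.1 "+name)) // the sed branch
	} else {
		data = append(data, []byte("127.0.1.1 "+name+"\n")...) // the tee -a branch
	}
	return os.WriteFile(path, data, 0o644)
}

func main() {
	if err := ensureHostname("/tmp/hosts", "stopped-upgrade-473000"); err != nil {
		fmt.Println(err)
	}
}
```
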
	I1025 18:46:45.482760    4810 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19868-1112/.minikube CaCertPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19868-1112/.minikube}
	I1025 18:46:45.482780    4810 buildroot.go:174] setting up certificates
	I1025 18:46:45.482785    4810 provision.go:84] configureAuth start
	I1025 18:46:45.482792    4810 provision.go:143] copyHostCerts
	I1025 18:46:45.482875    4810 exec_runner.go:144] found /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.pem, removing ...
	I1025 18:46:45.482882    4810 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.pem
	I1025 18:46:45.483005    4810 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.pem (1082 bytes)
	I1025 18:46:45.483219    4810 exec_runner.go:144] found /Users/jenkins/minikube-integration/19868-1112/.minikube/cert.pem, removing ...
	I1025 18:46:45.483224    4810 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19868-1112/.minikube/cert.pem
	I1025 18:46:45.483287    4810 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19868-1112/.minikube/cert.pem (1123 bytes)
	I1025 18:46:45.483424    4810 exec_runner.go:144] found /Users/jenkins/minikube-integration/19868-1112/.minikube/key.pem, removing ...
	I1025 18:46:45.483429    4810 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19868-1112/.minikube/key.pem
	I1025 18:46:45.483486    4810 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19868-1112/.minikube/key.pem (1675 bytes)
	I1025 18:46:45.483587    4810 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-473000 san=[127.0.0.1 localhost minikube stopped-upgrade-473000]
	I1025 18:46:45.632215    4810 provision.go:177] copyRemoteCerts
	I1025 18:46:45.632282    4810 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 18:46:45.632291    4810 sshutil.go:53] new ssh client: &{IP:localhost Port:62508 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/id_rsa Username:docker}
	I1025 18:46:45.671274    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 18:46:45.678738    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 18:46:45.685690    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 18:46:45.692463    4810 provision.go:87] duration metric: took 209.674166ms to configureAuth
	I1025 18:46:45.692473    4810 buildroot.go:189] setting minikube options for container-runtime
	I1025 18:46:45.692586    4810 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:46:45.692640    4810 main.go:141] libmachine: Using SSH client type: native
	I1025 18:46:45.692735    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033f65f0] 0x1033f8e30 <nil>  [] 0s} localhost 62508 <nil> <nil>}
	I1025 18:46:45.692758    4810 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 18:46:45.765557    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1025 18:46:45.765565    4810 buildroot.go:70] root file system type: tmpfs
	I1025 18:46:45.765630    4810 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 18:46:45.765708    4810 main.go:141] libmachine: Using SSH client type: native
	I1025 18:46:45.765820    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033f65f0] 0x1033f8e30 <nil>  [] 0s} localhost 62508 <nil> <nil>}
	I1025 18:46:45.765853    4810 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 18:46:45.841589    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 18:46:45.841651    4810 main.go:141] libmachine: Using SSH client type: native
	I1025 18:46:45.841757    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033f65f0] 0x1033f8e30 <nil>  [] 0s} localhost 62508 <nil> <nil>}
	I1025 18:46:45.841766    4810 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 18:46:46.233044    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1025 18:46:46.233058    4810 machine.go:96] duration metric: took 1.014568583s to provisionDockerMachine
	I1025 18:46:46.233065    4810 start.go:293] postStartSetup for "stopped-upgrade-473000" (driver="qemu2")
	I1025 18:46:46.233072    4810 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 18:46:46.233139    4810 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 18:46:46.233148    4810 sshutil.go:53] new ssh client: &{IP:localhost Port:62508 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/id_rsa Username:docker}
	I1025 18:46:46.276589    4810 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 18:46:46.278167    4810 info.go:137] Remote host: Buildroot 2021.02.12
	I1025 18:46:46.278175    4810 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19868-1112/.minikube/addons for local assets ...
	I1025 18:46:46.278276    4810 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19868-1112/.minikube/files for local assets ...
	I1025 18:46:46.278425    4810 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19868-1112/.minikube/files/etc/ssl/certs/16722.pem -> 16722.pem in /etc/ssl/certs
	I1025 18:46:46.278592    4810 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 18:46:46.281483    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/files/etc/ssl/certs/16722.pem --> /etc/ssl/certs/16722.pem (1708 bytes)
	I1025 18:46:46.288199    4810 start.go:296] duration metric: took 55.129125ms for postStartSetup
	I1025 18:46:46.288215    4810 fix.go:56] duration metric: took 21.028823458s for fixHost
	I1025 18:46:46.288266    4810 main.go:141] libmachine: Using SSH client type: native
	I1025 18:46:46.288364    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033f65f0] 0x1033f8e30 <nil>  [] 0s} localhost 62508 <nil> <nil>}
	I1025 18:46:46.288375    4810 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 18:46:46.364259    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729907206.240625462
	
	I1025 18:46:46.364268    4810 fix.go:216] guest clock: 1729907206.240625462
	I1025 18:46:46.364272    4810 fix.go:229] Guest: 2024-10-25 18:46:46.240625462 -0700 PDT Remote: 2024-10-25 18:46:46.288216 -0700 PDT m=+21.150621210 (delta=-47.590538ms)
	I1025 18:46:46.364284    4810 fix.go:200] guest clock delta is within tolerance: -47.590538ms
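
	The guest-clock check above runs `date +%s.%N` in the VM, parses the result, and diffs it against host time; here the -47.590538ms delta is inside tolerance, so no resync is needed. A sketch of the delta computation, using the timestamps from the log (the 1s tolerance is an assumption for illustration):

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns guest - host.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Unix(0, int64(1729907206.288216*1e9)) // the "Remote" timestamp from the log
	delta, err := clockDelta("1729907206.240625462", host) // the guest's reply
	if err != nil {
		panic(err)
	}
	fmt.Printf("delta=%v within tolerance=%v\n", delta, delta.Abs() < time.Second)
}
```
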
	I1025 18:46:46.364287    4810 start.go:83] releasing machines lock for "stopped-upgrade-473000", held for 21.104904875s
	I1025 18:46:46.364371    4810 ssh_runner.go:195] Run: cat /version.json
	I1025 18:46:46.364381    4810 sshutil.go:53] new ssh client: &{IP:localhost Port:62508 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/id_rsa Username:docker}
	I1025 18:46:46.364390    4810 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 18:46:46.364408    4810 sshutil.go:53] new ssh client: &{IP:localhost Port:62508 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/id_rsa Username:docker}
	W1025 18:46:46.364892    4810 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:62652->127.0.0.1:62508: write: broken pipe
	I1025 18:46:46.364913    4810 retry.go:31] will retry after 308.163959ms: ssh: handshake failed: write tcp 127.0.0.1:62652->127.0.0.1:62508: write: broken pipe
	W1025 18:46:46.718938    4810 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1025 18:46:46.719056    4810 ssh_runner.go:195] Run: systemctl --version
	I1025 18:46:46.723161    4810 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 18:46:46.725718    4810 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 18:46:46.725783    4810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1025 18:46:46.730165    4810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1025 18:46:46.736270    4810 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 18:46:46.736280    4810 start.go:495] detecting cgroup driver to use...
	I1025 18:46:46.736385    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:46:46.744612    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1025 18:46:46.748054    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 18:46:46.751521    4810 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 18:46:46.751550    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 18:46:46.754927    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:46:46.758335    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 18:46:46.761172    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:46:46.763977    4810 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 18:46:46.767281    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 18:46:46.770615    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1025 18:46:46.773618    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1025 18:46:46.776470    4810 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 18:46:46.779513    4810 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 18:46:46.782626    4810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:46:46.870978    4810 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 18:46:46.877138    4810 start.go:495] detecting cgroup driver to use...
	I1025 18:46:46.877223    4810 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 18:46:46.883073    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 18:46:46.888097    4810 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 18:46:46.894779    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 18:46:46.899366    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 18:46:46.904197    4810 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1025 18:46:46.951438    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 18:46:46.956694    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:46:46.962060    4810 ssh_runner.go:195] Run: which cri-dockerd
	I1025 18:46:46.963255    4810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 18:46:46.966333    4810 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 18:46:46.971411    4810 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 18:46:47.050445    4810 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 18:46:47.113608    4810 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 18:46:47.113678    4810 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 18:46:47.119111    4810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:46:47.197395    4810 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:46:48.345536    4810 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.148147583s)
	I1025 18:46:48.345616    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1025 18:46:48.350163    4810 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1025 18:46:48.356351    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 18:46:48.361341    4810 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 18:46:48.438941    4810 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:46:48.512687    4810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:46:48.577704    4810 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 18:46:48.584255    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 18:46:48.589347    4810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:46:48.654206    4810 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1025 18:46:48.692723    4810 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 18:46:48.693540    4810 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 18:46:48.695432    4810 start.go:563] Will wait 60s for crictl version
	I1025 18:46:48.695478    4810 ssh_runner.go:195] Run: which crictl
	I1025 18:46:48.696693    4810 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 18:46:48.712080    4810 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1025 18:46:48.712157    4810 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:46:48.729662    4810 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:46:48.749877    4810 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1025 18:46:48.749962    4810 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1025 18:46:48.751188    4810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:46:48.754976    4810 kubeadm.go:883] updating cluster {Name:stopped-upgrade-473000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62543 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1025 18:46:48.755024    4810 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1025 18:46:48.755072    4810 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:46:48.765230    4810 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 18:46:48.765238    4810 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
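
	The decision above is a simple membership check: `docker images` reports only k8s.gcr.io-named images, so the expected registry.k8s.io/kube-apiserver:v1.24.1 counts as not preloaded and the cached tarball is copied over instead. A sketch of that check (the function name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// preloaded reports whether want appears verbatim in the image list,
// as returned by `docker images --format {{.Repository}}:{{.Tag}}`.
func preloaded(images []string, want string) bool {
	for _, img := range images {
		if strings.TrimSpace(img) == want {
			return true
		}
	}
	return false
}

func main() {
	got := []string{ // abbreviated from the -- stdout -- block above
		"k8s.gcr.io/kube-apiserver:v1.24.1",
		"k8s.gcr.io/etcd:3.5.3-0",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	if !preloaded(got, "registry.k8s.io/kube-apiserver:v1.24.1") {
		fmt.Println("registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded")
	}
}
```
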
	I1025 18:46:48.765295    4810 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 18:46:48.768368    4810 ssh_runner.go:195] Run: which lz4
	I1025 18:46:48.769665    4810 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 18:46:48.770944    4810 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 18:46:48.770955    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1025 18:46:49.781534    4810 docker.go:653] duration metric: took 1.011945042s to copy over tarball
	I1025 18:46:49.781624    4810 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
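
The preload restore above is a fixed three-step recipe: probe the guest for an existing tarball with stat, copy the cached lz4 archive in (the log's "scp" is minikube's internal SSH file copy, not the scp binary), then unpack it into /var with tar's lz4 filter so docker's overlay2 image store is pre-populated. A minimal shell sketch of the same sequence, run inside the guest with the paths from the log:

    # a non-zero exit here means no tarball is left over from a previous run
    stat -c "%s %y" /preloaded.tar.lz4
    # ...the host then copies the cached archive to /preloaded.tar.lz4 over SSH...
    # unpack docker's image store, preserving extended attributes
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
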
	I1025 18:46:49.362352    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:46:49.362461    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:46:49.374364    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:46:49.374452    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:46:49.389135    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:46:49.389228    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:46:49.401588    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:46:49.401678    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:46:49.414365    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:46:49.414457    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:46:49.426283    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:46:49.426362    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:46:49.438330    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:46:49.438411    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:46:49.450017    4599 logs.go:282] 0 containers: []
	W1025 18:46:49.450028    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:46:49.450098    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:46:49.462183    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:46:49.462205    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:46:49.462211    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:46:49.501982    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:46:49.501999    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:46:49.540691    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:46:49.540704    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:46:49.556710    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:46:49.556726    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:46:49.569939    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:46:49.569951    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:46:49.589142    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:46:49.589157    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:46:49.604543    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:46:49.604557    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:46:49.621217    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:46:49.621229    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:46:49.635577    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:46:49.635590    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:46:49.649965    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:46:49.649978    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:46:49.663051    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:46:49.663065    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:46:49.682652    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:46:49.682667    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:46:49.696400    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:46:49.696413    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:46:49.709251    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:46:49.709263    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:46:49.734655    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:46:49.734670    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:46:49.740068    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:46:49.740079    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:46:49.755727    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:46:49.755740    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
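
Each diagnostics pass in the interleaved 4599 trace is one pattern repeated per component: list matching containers via a docker ps name filter (kubelet-managed containers are named k8s_<container>_<pod>_<namespace>_...), then tail each container's logs. A hedged sketch of that loop, built from the same commands the log shows:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
        echo "== ${c} ${id} =="
        docker logs --tail 400 "${id}"
      done
    done
    # unit logs and kernel messages round out the picture
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
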
	I1025 18:46:50.973874    4810 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.192259916s)
	I1025 18:46:50.973889    4810 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1025 18:46:50.989872    4810 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 18:46:50.992768    4810 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1025 18:46:50.997689    4810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:46:51.083002    4810 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:46:52.594428    4810 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.511442458s)
	I1025 18:46:52.594533    4810 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:46:52.605339    4810 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 18:46:52.605349    4810 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1025 18:46:52.605353    4810 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 18:46:52.610415    4810 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:46:52.612114    4810 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 18:46:52.613730    4810 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 18:46:52.614231    4810 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:46:52.616402    4810 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 18:46:52.616508    4810 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 18:46:52.617867    4810 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 18:46:52.618170    4810 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 18:46:52.619337    4810 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1025 18:46:52.619449    4810 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 18:46:52.620397    4810 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 18:46:52.620979    4810 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1025 18:46:52.621737    4810 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1025 18:46:52.622074    4810 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:46:52.622875    4810 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1025 18:46:52.623907    4810 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:46:53.214063    4810 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1025 18:46:53.214117    4810 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1025 18:46:53.234269    4810 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1025 18:46:53.234314    4810 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 18:46:53.234390    4810 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1025 18:46:53.234967    4810 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1025 18:46:53.234983    4810 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 18:46:53.235021    4810 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1025 18:46:53.246219    4810 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1025 18:46:53.249170    4810 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1025 18:46:53.268166    4810 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1025 18:46:53.271082    4810 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 18:46:53.279010    4810 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1025 18:46:53.279033    4810 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 18:46:53.279100    4810 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1025 18:46:53.294433    4810 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1025 18:46:53.294565    4810 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1025 18:46:53.294581    4810 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 18:46:53.294639    4810 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 18:46:53.304190    4810 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1025 18:46:53.362442    4810 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1025 18:46:53.373836    4810 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1025 18:46:53.373858    4810 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1025 18:46:53.373918    4810 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1025 18:46:53.383808    4810 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1025 18:46:53.386799    4810 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1025 18:46:53.396709    4810 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1025 18:46:53.396732    4810 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1025 18:46:53.396792    4810 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1025 18:46:53.406729    4810 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1025 18:46:53.406863    4810 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1025 18:46:53.408383    4810 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1025 18:46:53.408394    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1025 18:46:53.416692    4810 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1025 18:46:53.416701    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1025 18:46:53.441979    4810 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1025 18:46:53.465930    4810 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1025 18:46:53.466083    4810 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:46:53.476440    4810 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1025 18:46:53.476462    4810 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:46:53.476537    4810 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:46:53.486423    4810 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1025 18:46:53.486588    4810 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1025 18:46:53.488066    4810 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1025 18:46:53.488077    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1025 18:46:53.527640    4810 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1025 18:46:53.527653    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1025 18:46:53.566988    4810 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W1025 18:46:53.568441    4810 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1025 18:46:53.568551    4810 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:46:53.579843    4810 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1025 18:46:53.579868    4810 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:46:53.579931    4810 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:46:53.593675    4810 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 18:46:53.593814    4810 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1025 18:46:53.595286    4810 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1025 18:46:53.595301    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1025 18:46:53.629005    4810 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1025 18:46:53.629019    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1025 18:46:53.871882    4810 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1025 18:46:53.871919    4810 cache_images.go:92] duration metric: took 1.266585334s to LoadCachedImages
	W1025 18:46:53.871960    4810 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
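
The LoadCachedImages pass repeats one routine per image: inspect the runtime's copy, and if its ID does not carry the expected digest, remove it, copy the cached arm64 tarball into the guest, and pipe it through docker load. A rough shell equivalent for a single image (name, digest, and path taken from the pause:3.7 entries above):

    img=registry.k8s.io/pause:3.7
    want=e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550
    if ! docker image inspect --format '{{.Id}}' "$img" 2>/dev/null | grep -q "$want"; then
      docker rmi "$img" 2>/dev/null
      # ...host copies cache/images/arm64/registry.k8s.io/pause_3.7 into the guest first...
      sudo cat /var/lib/minikube/images/pause_3.7 | docker load
    fi
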
	I1025 18:46:53.871965    4810 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1025 18:46:53.872023    4810 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-473000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 18:46:53.872097    4810 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 18:46:53.885948    4810 cni.go:84] Creating CNI manager for ""
	I1025 18:46:53.885969    4810 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:46:53.885978    4810 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1025 18:46:53.885991    4810 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-473000 NodeName:stopped-upgrade-473000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 18:46:53.886066    4810 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-473000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 18:46:53.886140    4810 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1025 18:46:53.889048    4810 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 18:46:53.889086    4810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 18:46:53.891664    4810 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1025 18:46:53.896953    4810 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 18:46:53.902093    4810 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1025 18:46:53.907589    4810 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1025 18:46:53.908881    4810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:46:53.912316    4810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:46:53.993496    4810 ssh_runner.go:195] Run: sudo systemctl start kubelet
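
Wiring the kubelet up is plain systemd plumbing: drop the generated unit and its 10-kubeadm.conf override into place, pin control-plane.minikube.internal in /etc/hosts, reload systemd, and start the service. A sketch of the drop-in and the commands around it, with the ExecStart taken verbatim from the [Service] block logged above:

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (written over SSH above)
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-473000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15

    sudo systemctl daemon-reload
    sudo systemctl start kubelet
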
	I1025 18:46:54.001155    4810 certs.go:68] Setting up /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000 for IP: 10.0.2.15
	I1025 18:46:54.001168    4810 certs.go:194] generating shared ca certs ...
	I1025 18:46:54.001176    4810 certs.go:226] acquiring lock for ca certs: {Name:mk4d96eff7eec2b0b424f4d9808345f1ae37fa52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:46:54.001372    4810 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.key
	I1025 18:46:54.002156    4810 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/proxy-client-ca.key
	I1025 18:46:54.002168    4810 certs.go:256] generating profile certs ...
	I1025 18:46:54.002457    4810 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/client.key
	I1025 18:46:54.002480    4810 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.key.3dec5c91
	I1025 18:46:54.002496    4810 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.crt.3dec5c91 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1025 18:46:54.053224    4810 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.crt.3dec5c91 ...
	I1025 18:46:54.053237    4810 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.crt.3dec5c91: {Name:mk05743962903270bdc048d28ab3d3d2206b4886 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:46:54.053528    4810 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.key.3dec5c91 ...
	I1025 18:46:54.053533    4810 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.key.3dec5c91: {Name:mk7ebfc7b0c4a484c3f5b41bb12ac54c0b953481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:46:54.053689    4810 certs.go:381] copying /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.crt.3dec5c91 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.crt
	I1025 18:46:54.053823    4810 certs.go:385] copying /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.key.3dec5c91 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.key
	I1025 18:46:54.054119    4810 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/proxy-client.key
	I1025 18:46:54.054290    4810 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/1672.pem (1338 bytes)
	W1025 18:46:54.054436    4810 certs.go:480] ignoring /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/1672_empty.pem, impossibly tiny 0 bytes
	I1025 18:46:54.054442    4810 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 18:46:54.054466    4810 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem (1082 bytes)
	I1025 18:46:54.054486    4810 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem (1123 bytes)
	I1025 18:46:54.054507    4810 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/key.pem (1675 bytes)
	I1025 18:46:54.054551    4810 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/files/etc/ssl/certs/16722.pem (1708 bytes)
	I1025 18:46:54.054888    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 18:46:54.062438    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 18:46:54.069227    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 18:46:54.075820    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 18:46:54.083419    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 18:46:54.090782    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 18:46:54.097966    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 18:46:54.104743    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 18:46:54.111400    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/files/etc/ssl/certs/16722.pem --> /usr/share/ca-certificates/16722.pem (1708 bytes)
	I1025 18:46:54.118069    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 18:46:54.124983    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/1672.pem --> /usr/share/ca-certificates/1672.pem (1338 bytes)
	I1025 18:46:54.131668    4810 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 18:46:54.137401    4810 ssh_runner.go:195] Run: openssl version
	I1025 18:46:54.139492    4810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16722.pem && ln -fs /usr/share/ca-certificates/16722.pem /etc/ssl/certs/16722.pem"
	I1025 18:46:54.143208    4810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16722.pem
	I1025 18:46:54.144603    4810 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:50 /usr/share/ca-certificates/16722.pem
	I1025 18:46:54.144639    4810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16722.pem
	I1025 18:46:54.146285    4810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16722.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 18:46:54.148993    4810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 18:46:54.151863    4810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:46:54.153350    4810 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:46:54.153379    4810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:46:54.155183    4810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 18:46:54.158562    4810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1672.pem && ln -fs /usr/share/ca-certificates/1672.pem /etc/ssl/certs/1672.pem"
	I1025 18:46:54.161527    4810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672.pem
	I1025 18:46:54.162768    4810 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:50 /usr/share/ca-certificates/1672.pem
	I1025 18:46:54.162798    4810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672.pem
	I1025 18:46:54.164661    4810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1672.pem /etc/ssl/certs/51391683.0"
	I1025 18:46:54.167748    4810 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 18:46:54.169114    4810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 18:46:54.171098    4810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 18:46:54.173075    4810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 18:46:54.174968    4810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 18:46:54.177057    4810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 18:46:54.178730    4810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
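
Two openssl idioms drive the certificate steps above. Trust installation: OpenSSL looks CAs up in /etc/ssl/certs by subject-hash filename, so each .pem gets a symlink named <hash>.0 (the b5213941.0 and 3ec20f2e.0 links in the log). Expiry: -checkend 86400 exits non-zero if the cert expires within the next 24 hours. Sketch:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"
    # fails (exit 1) if the cert expires within the next 24h
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
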
	I1025 18:46:54.180459    4810 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-473000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62543 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 18:46:54.180537    4810 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:46:54.190309    4810 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 18:46:54.193241    4810 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1025 18:46:54.193250    4810 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1025 18:46:54.193284    4810 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 18:46:54.195972    4810 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:46:54.196425    4810 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-473000" does not appear in /Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:46:54.196541    4810 kubeconfig.go:62] /Users/jenkins/minikube-integration/19868-1112/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-473000" cluster setting kubeconfig missing "stopped-upgrade-473000" context setting]
	I1025 18:46:54.196771    4810 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/kubeconfig: {Name:mk88d1ac601cc80b64027f8557b82969027e8e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:46:54.197230    4810 kapi.go:59] client config for stopped-upgrade-473000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/client.key", CAFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104e52680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:46:54.197733    4810 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 18:46:54.200290    4810 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-473000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I1025 18:46:54.200295    4810 kubeadm.go:1160] stopping kube-system containers ...
	I1025 18:46:54.200335    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:46:54.211060    4810 docker.go:483] Stopping containers: [ca283390d210 451202c4a948 11b566bdf60e 5f90e347d427 1b9369654c64 bf8dc2f49a56 dfa41dbae324 6c4b901e85f8]
	I1025 18:46:54.211132    4810 ssh_runner.go:195] Run: docker stop ca283390d210 451202c4a948 11b566bdf60e 5f90e347d427 1b9369654c64 bf8dc2f49a56 dfa41dbae324 6c4b901e85f8
	I1025 18:46:54.225032    4810 ssh_runner.go:195] Run: sudo systemctl stop kubelet
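
Tearing down the old control plane is a docker-level sweep: match every kubelet-managed container in kube-system by its name pattern, stop them all at once, then stop the kubelet so nothing gets restarted underneath the reconfiguration. Sketch:

    ids=$(docker ps -a --filter 'name=k8s_.*_(kube-system)_' --format '{{.ID}}')
    [ -n "$ids" ] && docker stop $ids
    sudo systemctl stop kubelet
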
	I1025 18:46:54.230560    4810 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:46:54.233398    4810 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:46:54.233404    4810 kubeadm.go:157] found existing configuration files:
	
	I1025 18:46:54.233435    4810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/admin.conf
	I1025 18:46:54.235890    4810 kubeadm.go:163] "https://control-plane.minikube.internal:62543" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 18:46:54.235921    4810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 18:46:54.238881    4810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/kubelet.conf
	I1025 18:46:54.241618    4810 kubeadm.go:163] "https://control-plane.minikube.internal:62543" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 18:46:54.241654    4810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 18:46:54.244233    4810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/controller-manager.conf
	I1025 18:46:54.246956    4810 kubeadm.go:163] "https://control-plane.minikube.internal:62543" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 18:46:54.246981    4810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 18:46:54.249869    4810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/scheduler.conf
	I1025 18:46:54.252331    4810 kubeadm.go:163] "https://control-plane.minikube.internal:62543" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 18:46:54.252358    4810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 18:46:54.255374    4810 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
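
The stale-config cleanup keeps only kubeconfigs that already point at the expected endpoint: each /etc/kubernetes/*.conf that does not mention https://control-plane.minikube.internal:62543 is removed (here all four files are absent, so every grep fails with status 2 and every rm is a no-op), and the freshly rendered kubeadm.yaml.new replaces the active config. Sketch:

    ep=https://control-plane.minikube.internal:62543
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
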
	I1025 18:46:54.258577    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:46:54.281272    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:46:54.680591    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:46:54.808945    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:46:54.838541    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
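
Rather than one monolithic kubeadm init, the restart path replays individual phases against the rendered config: certificates, kubeconfig files, kubelet bootstrap, the three control-plane static pods, and local etcd. The equivalent commands, invoked exactly as the log runs them:

    cfg=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm init phase $phase --config "$cfg"
    done
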
	I1025 18:46:54.861209    4810 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:46:54.861299    4810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:46:52.270429    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:46:55.363461    4810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:46:55.863164    4810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:46:55.867207    4810 api_server.go:72] duration metric: took 1.006020333s to wait for apiserver process to appear ...
	I1025 18:46:55.867216    4810 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:46:55.867229    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
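
Readiness is then established in two stages: poll for a kube-apiserver process with pgrep, then poll /healthz over TLS until it answers (each failed attempt in this log surfaces as "context deadline exceeded"). A curl-based approximation of the probe; note that -k skips server verification here for brevity, whereas minikube's own checker trusts the cluster CA:

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done
    until curl -ksf --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do sleep 1; done
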
	I1025 18:46:57.272725    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:46:57.273270    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:46:57.314155    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:46:57.314317    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:46:57.336815    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:46:57.336947    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:46:57.352259    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:46:57.352352    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:46:57.365427    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:46:57.365510    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:46:57.376647    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:46:57.376730    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:46:57.387518    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:46:57.387594    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:46:57.398039    4599 logs.go:282] 0 containers: []
	W1025 18:46:57.398052    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:46:57.398109    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:46:57.411995    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:46:57.412015    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:46:57.412021    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:46:57.426348    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:46:57.426361    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:46:57.440423    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:46:57.440433    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:46:57.454755    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:46:57.454767    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:46:57.466624    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:46:57.466633    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:46:57.505487    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:46:57.505497    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:46:57.509884    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:46:57.509892    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:46:57.521805    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:46:57.521817    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:46:57.535099    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:46:57.535112    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:46:57.547164    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:46:57.547174    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:46:57.558746    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:46:57.558758    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:46:57.570016    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:46:57.570027    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:46:57.593187    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:46:57.593196    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:46:57.628591    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:46:57.628603    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:46:57.643792    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:46:57.643806    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:46:57.661536    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:46:57.661545    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:46:57.673850    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:46:57.673863    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:47:00.187230    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:00.869301    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:00.869404    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:05.977435    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:05.977487    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:05.295137    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:05.295313    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:47:05.308966    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:47:05.309058    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:47:05.320873    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:47:05.320952    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:47:05.331429    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:47:05.331511    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:47:05.342223    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:47:05.342305    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:47:05.352795    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:47:05.352868    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:47:05.363238    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:47:05.363318    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:47:05.373518    4599 logs.go:282] 0 containers: []
	W1025 18:47:05.373531    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:47:05.373596    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:47:05.384345    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:47:05.384364    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:47:05.384369    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:47:05.399075    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:47:05.399085    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:47:05.410329    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:47:05.410338    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:47:05.445880    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:47:05.445894    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:47:05.458610    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:47:05.458624    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:47:05.473618    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:47:05.473631    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:47:05.485191    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:47:05.485205    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:47:05.496355    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:47:05.496367    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:47:05.520048    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:47:05.520058    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:47:05.532003    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:47:05.532013    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:47:05.546385    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:47:05.546398    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:47:05.558031    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:47:05.558045    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:47:05.569122    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:47:05.569135    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:47:05.580982    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:47:05.580992    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:47:05.597700    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:47:05.597713    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:47:05.633923    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:47:05.633933    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:47:05.637942    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:47:05.637950    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
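Note on the alternating "Checking apiserver healthz ..." / "stopped: ... (Client.Timeout exceeded while awaiting headers)" pairs from both pid 4599 and pid 4810: they are a poll-with-timeout loop against https://10.0.2.15:8443/healthz. A minimal Go sketch of that pattern follows; the 5-second timeout, the retry bound, and the skipped TLS verification are assumptions for illustration, not minikube's exact implementation (minikube trusts its own cluster CA, as the kapi.go rest.Config later in this log shows). When the apiserver never answers, http.Client.Timeout is what produces exactly the "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" error text seen above.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // bounds the whole request; an unresponsive apiserver yields
            // "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
            Timeout: 5 * time.Second, // assumed value
            Transport: &http.Transport{
                // skipped verification is a shortcut for this sketch only
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for attempt := 0; attempt < 10; attempt++ {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Printf("stopped: %v\n", err)
                time.Sleep(3 * time.Second) // back off before re-checking
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("healthz: %s %s\n", resp.Status, body)
            return
        }
        fmt.Println("apiserver never became healthy")
    }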
	I1025 18:47:08.153594    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:10.978624    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:10.978731    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:13.154169    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:13.154521    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:47:13.190253    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:47:13.190413    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:47:13.211025    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:47:13.211135    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:47:13.225731    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:47:13.225811    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:47:13.238479    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:47:13.238550    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:47:13.249171    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:47:13.249251    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:47:13.259480    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:47:13.259562    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:47:13.269130    4599 logs.go:282] 0 containers: []
	W1025 18:47:13.269140    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:47:13.269196    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:47:13.280005    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:47:13.280023    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:47:13.280028    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:47:13.294717    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:47:13.294731    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:47:13.305880    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:47:13.305895    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
	I1025 18:47:13.319640    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:47:13.319650    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:47:13.335163    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:47:13.335177    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:47:13.373386    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:47:13.373397    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:47:13.409323    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:47:13.409338    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:47:13.420833    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:47:13.420844    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:47:13.444858    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:47:13.444868    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:47:13.459471    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:47:13.459482    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:47:13.471130    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:47:13.471143    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:47:13.482759    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:47:13.482772    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:47:13.495161    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:47:13.495173    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:47:13.499299    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:47:13.499308    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:47:13.511765    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:47:13.511778    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:47:13.526347    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:47:13.526360    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:47:13.541021    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:47:13.541032    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:47:15.980347    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:15.980444    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:16.060724    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:20.982470    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:20.982571    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:21.063508    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:21.063852    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:47:21.092207    4599 logs.go:282] 2 containers: [49990bfd759a 780c0da270ff]
	I1025 18:47:21.092359    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:47:21.110508    4599 logs.go:282] 2 containers: [737a792d17b1 35f00200f8b4]
	I1025 18:47:21.110611    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:47:21.124112    4599 logs.go:282] 1 containers: [21491f6302d4]
	I1025 18:47:21.124200    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:47:21.138061    4599 logs.go:282] 2 containers: [080db06b1ea7 3fbb4d7c4f18]
	I1025 18:47:21.138145    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:47:21.153682    4599 logs.go:282] 1 containers: [7ea55a8b2c0d]
	I1025 18:47:21.153762    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:47:21.164232    4599 logs.go:282] 2 containers: [63ace10533b8 34a4e0843139]
	I1025 18:47:21.164300    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:47:21.174652    4599 logs.go:282] 0 containers: []
	W1025 18:47:21.174663    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:47:21.174747    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:47:21.203592    4599 logs.go:282] 2 containers: [270403839b88 da65d784675f]
	I1025 18:47:21.203610    4599 logs.go:123] Gathering logs for storage-provisioner [da65d784675f] ...
	I1025 18:47:21.203616    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 da65d784675f"
	I1025 18:47:21.217297    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:47:21.217307    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:47:21.239412    4599 logs.go:123] Gathering logs for kube-apiserver [49990bfd759a] ...
	I1025 18:47:21.239421    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49990bfd759a"
	I1025 18:47:21.253408    4599 logs.go:123] Gathering logs for kube-apiserver [780c0da270ff] ...
	I1025 18:47:21.253420    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 780c0da270ff"
	I1025 18:47:21.266234    4599 logs.go:123] Gathering logs for kube-proxy [7ea55a8b2c0d] ...
	I1025 18:47:21.266247    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ea55a8b2c0d"
	I1025 18:47:21.278260    4599 logs.go:123] Gathering logs for kube-controller-manager [63ace10533b8] ...
	I1025 18:47:21.278271    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63ace10533b8"
	I1025 18:47:21.296657    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:47:21.296671    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:47:21.334776    4599 logs.go:123] Gathering logs for etcd [35f00200f8b4] ...
	I1025 18:47:21.334788    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35f00200f8b4"
	I1025 18:47:21.349706    4599 logs.go:123] Gathering logs for kube-scheduler [080db06b1ea7] ...
	I1025 18:47:21.349720    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 080db06b1ea7"
	I1025 18:47:21.361173    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:47:21.361188    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:47:21.377367    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:47:21.377383    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:47:21.382169    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:47:21.382178    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:47:21.420893    4599 logs.go:123] Gathering logs for etcd [737a792d17b1] ...
	I1025 18:47:21.420907    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 737a792d17b1"
	I1025 18:47:21.435233    4599 logs.go:123] Gathering logs for storage-provisioner [270403839b88] ...
	I1025 18:47:21.435247    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 270403839b88"
	I1025 18:47:21.446180    4599 logs.go:123] Gathering logs for coredns [21491f6302d4] ...
	I1025 18:47:21.446189    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21491f6302d4"
	I1025 18:47:21.457119    4599 logs.go:123] Gathering logs for kube-scheduler [3fbb4d7c4f18] ...
	I1025 18:47:21.457133    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3fbb4d7c4f18"
	I1025 18:47:21.471958    4599 logs.go:123] Gathering logs for kube-controller-manager [34a4e0843139] ...
	I1025 18:47:21.471971    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 34a4e0843139"
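Each "Gathering logs for X" cycle above is two steps: discover container IDs with a filtered docker ps, then capture the last 400 lines of each container. A local Go sketch of the same collection step; exec.Command here is a stand-in for minikube's ssh_runner, which runs these commands inside the guest VM:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // discover containers the same way as the
        // `docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}` lines above
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_kube-apiserver", "--format", "{{.ID}}").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        fmt.Printf("%d containers: %v\n", len(ids), ids)
        // then tail each one, as in the `docker logs --tail 400 <id>` lines
        for _, id := range ids {
            logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            if err != nil {
                fmt.Printf("docker logs %s: %v\n", id, err)
                continue
            }
            fmt.Printf("==> %s <==\n%s", id, logs)
        }
    }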
	I1025 18:47:23.984702    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:28.987656    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:28.987844    4599 kubeadm.go:597] duration metric: took 4m4.0017965s to restartPrimaryControlPlane
	W1025 18:47:28.988007    4599 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1025 18:47:28.988077    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1025 18:47:30.000305    4599 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.012188459s)
	I1025 18:47:30.000538    4599 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:47:30.005558    4599 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:47:30.008528    4599 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:47:30.011279    4599 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:47:30.011284    4599 kubeadm.go:157] found existing configuration files:
	
	I1025 18:47:30.011311    4599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/admin.conf
	I1025 18:47:30.013754    4599 kubeadm.go:163] "https://control-plane.minikube.internal:62322" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 18:47:30.013782    4599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 18:47:30.016896    4599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/kubelet.conf
	I1025 18:47:30.019822    4599 kubeadm.go:163] "https://control-plane.minikube.internal:62322" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 18:47:30.019857    4599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 18:47:30.022369    4599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/controller-manager.conf
	I1025 18:47:30.025357    4599 kubeadm.go:163] "https://control-plane.minikube.internal:62322" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 18:47:30.025384    4599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 18:47:30.028807    4599 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/scheduler.conf
	I1025 18:47:30.031663    4599 kubeadm.go:163] "https://control-plane.minikube.internal:62322" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:62322 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 18:47:30.031691    4599 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 18:47:30.034251    4599 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 18:47:30.050272    4599 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1025 18:47:30.050299    4599 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 18:47:30.107050    4599 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:47:30.107113    4599 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:47:30.107179    4599 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 18:47:30.157221    4599 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:47:25.983845    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:25.983928    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:30.161448    4599 out.go:235]   - Generating certificates and keys ...
	I1025 18:47:30.161481    4599 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 18:47:30.161517    4599 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 18:47:30.161573    4599 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 18:47:30.161617    4599 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 18:47:30.161660    4599 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 18:47:30.161687    4599 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 18:47:30.161730    4599 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 18:47:30.161792    4599 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 18:47:30.161853    4599 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 18:47:30.161896    4599 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 18:47:30.161918    4599 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 18:47:30.161945    4599 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:47:30.201637    4599 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:47:30.424184    4599 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:47:30.473133    4599 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:47:30.552461    4599 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:47:30.582080    4599 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:47:30.582452    4599 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:47:30.582480    4599 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 18:47:30.670508    4599 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:47:30.985438    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:30.985462    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:30.674756    4599 out.go:235]   - Booting up control plane ...
	I1025 18:47:30.674801    4599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:47:30.674843    4599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:47:30.674882    4599 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:47:30.674930    4599 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:47:30.675015    4599 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:47:35.173699    4599 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502879 seconds
	I1025 18:47:35.173764    4599 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 18:47:35.177594    4599 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 18:47:35.698691    4599 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 18:47:35.699137    4599 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-889000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 18:47:36.203285    4599 kubeadm.go:310] [bootstrap-token] Using token: a0knbh.qb4bjtcmvw8hg9x6
	I1025 18:47:36.209485    4599 out.go:235]   - Configuring RBAC rules ...
	I1025 18:47:36.209555    4599 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 18:47:36.209606    4599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 18:47:36.216292    4599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 18:47:36.217204    4599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 18:47:36.218159    4599 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 18:47:36.218942    4599 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 18:47:36.222048    4599 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 18:47:36.401751    4599 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1025 18:47:36.607445    4599 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1025 18:47:36.607946    4599 kubeadm.go:310] 
	I1025 18:47:36.607975    4599 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1025 18:47:36.607983    4599 kubeadm.go:310] 
	I1025 18:47:36.608026    4599 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1025 18:47:36.608036    4599 kubeadm.go:310] 
	I1025 18:47:36.608052    4599 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1025 18:47:36.608088    4599 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 18:47:36.608117    4599 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 18:47:36.608121    4599 kubeadm.go:310] 
	I1025 18:47:36.608152    4599 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1025 18:47:36.608155    4599 kubeadm.go:310] 
	I1025 18:47:36.608182    4599 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 18:47:36.608185    4599 kubeadm.go:310] 
	I1025 18:47:36.608213    4599 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1025 18:47:36.608256    4599 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 18:47:36.608299    4599 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 18:47:36.608303    4599 kubeadm.go:310] 
	I1025 18:47:36.608354    4599 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 18:47:36.608388    4599 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1025 18:47:36.608393    4599 kubeadm.go:310] 
	I1025 18:47:36.608429    4599 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a0knbh.qb4bjtcmvw8hg9x6 \
	I1025 18:47:36.608488    4599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9d1b51b46aa29bee5add6dcd2f2839d068831832311340de43d2611a1555cef \
	I1025 18:47:36.608502    4599 kubeadm.go:310] 	--control-plane 
	I1025 18:47:36.608509    4599 kubeadm.go:310] 
	I1025 18:47:36.608574    4599 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1025 18:47:36.608579    4599 kubeadm.go:310] 
	I1025 18:47:36.608627    4599 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a0knbh.qb4bjtcmvw8hg9x6 \
	I1025 18:47:36.608694    4599 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9d1b51b46aa29bee5add6dcd2f2839d068831832311340de43d2611a1555cef 
	I1025 18:47:36.608759    4599 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:47:36.608766    4599 cni.go:84] Creating CNI manager for ""
	I1025 18:47:36.608773    4599 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:47:36.611368    4599 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 18:47:36.614497    4599 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 18:47:36.617575    4599 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
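The 496-byte /etc/cni/net.d/1-k8s.conflist pushed above carries the bridge CNI configuration chosen at cni.go:158. As a sketch of what writing such a file looks like, the Go snippet below embeds a conventional bridge+portmap conflist; the JSON body, the 10.244.0.0/16 subnet, and the file mode are illustrative assumptions, not the exact bytes minikube generated:

    package main

    import "os"

    // a conventional bridge CNI conflist; the real 1-k8s.conflist may differ
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        // 0644 so the kubelet and CNI plugins can read the config
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }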
	I1025 18:47:36.623113    4599 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 18:47:36.623177    4599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:47:36.623194    4599 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-889000 minikube.k8s.io/updated_at=2024_10_25T18_47_36_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=running-upgrade-889000 minikube.k8s.io/primary=true
	I1025 18:47:36.655576    4599 kubeadm.go:1113] duration metric: took 32.4525ms to wait for elevateKubeSystemPrivileges
	I1025 18:47:36.655609    4599 ops.go:34] apiserver oom_adj: -16
	I1025 18:47:36.666367    4599 kubeadm.go:394] duration metric: took 4m11.693619375s to StartCluster
	I1025 18:47:36.666385    4599 settings.go:142] acquiring lock: {Name:mk3ff32802ddfc6c1e0425afbf853ac78c436759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:47:36.666510    4599 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:47:36.666982    4599 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/kubeconfig: {Name:mk88d1ac601cc80b64027f8557b82969027e8e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:47:36.667188    4599 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:47:36.667221    4599 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 18:47:36.667257    4599 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-889000"
	I1025 18:47:36.667265    4599 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-889000"
	W1025 18:47:36.667271    4599 addons.go:243] addon storage-provisioner should already be in state true
	I1025 18:47:36.667284    4599 host.go:66] Checking if "running-upgrade-889000" exists ...
	I1025 18:47:36.667304    4599 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-889000"
	I1025 18:47:36.667314    4599 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-889000"
	I1025 18:47:36.667389    4599 config.go:182] Loaded profile config "running-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:47:36.668251    4599 kapi.go:59] client config for running-upgrade-889000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/running-upgrade-889000/client.key", CAFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1065ee680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:47:36.668630    4599 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-889000"
	W1025 18:47:36.668637    4599 addons.go:243] addon default-storageclass should already be in state true
	I1025 18:47:36.668644    4599 host.go:66] Checking if "running-upgrade-889000" exists ...
	I1025 18:47:36.671443    4599 out.go:177] * Verifying Kubernetes components...
	I1025 18:47:36.671782    4599 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 18:47:36.675607    4599 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 18:47:36.675613    4599 sshutil.go:53] new ssh client: &{IP:localhost Port:62290 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/running-upgrade-889000/id_rsa Username:docker}
	I1025 18:47:36.681428    4599 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:47:35.987770    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:35.987793    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:36.685475    4599 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:47:36.689422    4599 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:47:36.689429    4599 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 18:47:36.689436    4599 sshutil.go:53] new ssh client: &{IP:localhost Port:62290 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/running-upgrade-889000/id_rsa Username:docker}
	I1025 18:47:36.776192    4599 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 18:47:36.781457    4599 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:47:36.781518    4599 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:47:36.784056    4599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 18:47:36.787151    4599 api_server.go:72] duration metric: took 119.947375ms to wait for apiserver process to appear ...
	I1025 18:47:36.787159    4599 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:47:36.787166    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:36.810641    4599 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:47:37.102085    4599 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 18:47:37.102096    4599 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 18:47:40.990144    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:40.990183    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:41.789358    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:41.789383    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:45.992634    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:45.992663    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:46.789721    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:46.789741    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:50.995029    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:50.995049    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:51.790183    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:51.790206    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:55.997399    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:55.997553    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:47:56.009633    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:47:56.009728    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:47:56.020481    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:47:56.020559    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:47:56.031161    4810 logs.go:282] 0 containers: []
	W1025 18:47:56.031184    4810 logs.go:284] No container was found matching "coredns"
	I1025 18:47:56.031248    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:47:56.041718    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:47:56.041808    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:47:56.052150    4810 logs.go:282] 0 containers: []
	W1025 18:47:56.052162    4810 logs.go:284] No container was found matching "kube-proxy"
	I1025 18:47:56.052235    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:47:56.062449    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:47:56.062529    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:47:56.072427    4810 logs.go:282] 0 containers: []
	W1025 18:47:56.072438    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:47:56.072500    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:47:56.081716    4810 logs.go:282] 0 containers: []
	W1025 18:47:56.081729    4810 logs.go:284] No container was found matching "storage-provisioner"
	I1025 18:47:56.081735    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:47:56.081741    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:47:56.094005    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:47:56.094016    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:47:56.107430    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:47:56.107439    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:47:56.122548    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:47:56.122559    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:47:56.151977    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:47:56.151990    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:47:56.170290    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:47:56.170301    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:47:56.182239    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:47:56.182253    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:47:56.186851    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:47:56.186860    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:47:56.291554    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:47:56.291568    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:47:56.307335    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:47:56.307345    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:47:56.325430    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:47:56.325440    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:47:56.351135    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:47:56.351147    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:47:56.382437    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:47:56.382452    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:47:58.899065    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:56.790684    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:56.790713    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:03.901473    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:03.901645    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:03.915101    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:48:03.915175    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:03.926394    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:48:03.926462    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:03.936803    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:48:03.936898    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:03.947534    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:48:03.947619    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:03.957844    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:48:03.957911    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:03.972977    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:48:03.973064    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:03.985600    4810 logs.go:282] 0 containers: []
	W1025 18:48:03.985611    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:03.985681    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:03.996828    4810 logs.go:282] 1 containers: [d67f7969a5df]
	I1025 18:48:03.996845    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:48:03.996851    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:48:04.009890    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:48:04.009901    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:48:04.021131    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:48:04.021142    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:48:04.036191    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:48:04.036200    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:48:04.047641    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:48:04.047651    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:48:04.058557    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:48:04.058570    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:04.070424    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:04.070440    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:04.099875    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:04.099883    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:04.137758    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:48:04.137769    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:48:04.151516    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:48:04.151527    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:48:04.166048    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:48:04.166058    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:48:04.189102    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:48:04.189112    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:48:04.202971    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:04.202983    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:04.207205    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:48:04.207213    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:48:04.225567    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:48:04.225578    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:48:04.242462    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:04.242471    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:01.791320    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:01.791382    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:06.792132    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:06.792148    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1025 18:48:07.103181    4599 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1025 18:48:07.107651    4599 out.go:177] * Enabled addons: storage-provisioner
	I1025 18:48:06.770676    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:07.115605    4599 addons.go:510] duration metric: took 30.4476695s for enable addons: enabled=[storage-provisioner]
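The "Enabling 'default-storageclass' returned an error" warning above comes from an addon callback that lists StorageClasses through client-go and, with the apiserver unreachable, fails with the logged "dial tcp 10.0.2.15:8443: i/o timeout". A client-go sketch of that list call under the same kubeconfig; the kubeconfig path and error handling are simplified assumptions, not minikube's exact callback:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // assumed path; minikube builds its rest.Config in-process (see kapi.go above)
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // this is the call that times out when the apiserver is down
        scs, err := cs.StorageV1().StorageClasses().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            fmt.Println("Error listing StorageClasses:", err)
            return
        }
        for _, sc := range scs.Items {
            fmt.Println(sc.Name)
        }
    }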
	I1025 18:48:11.773141    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:11.773417    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:11.799433    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:48:11.799556    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:11.816344    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:48:11.816443    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:11.828968    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:48:11.829049    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:11.840184    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:48:11.840268    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:11.850536    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:48:11.850614    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:11.861075    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:48:11.861149    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:11.871554    4810 logs.go:282] 0 containers: []
	W1025 18:48:11.871568    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:11.871639    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:11.881692    4810 logs.go:282] 1 containers: [d67f7969a5df]
	I1025 18:48:11.881709    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:48:11.881714    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:48:11.892663    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:48:11.892675    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:11.906013    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:48:11.906026    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:48:11.929857    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:48:11.929872    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:48:11.949375    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:48:11.949386    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:48:11.963794    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:11.963804    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:11.989388    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:48:11.989397    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:48:12.003020    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:48:12.003031    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:48:12.020759    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:48:12.020771    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:48:12.032092    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:48:12.032103    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:48:12.044120    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:48:12.044135    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:48:12.060416    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:12.060426    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:12.095950    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:48:12.095961    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:48:12.109720    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:48:12.109732    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:48:12.127153    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:12.127167    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:12.155415    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:12.155423    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:14.661915    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:11.792987    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:11.793042    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:19.664417    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:19.664619    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:19.683921    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:48:19.684011    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:19.711475    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:48:19.711557    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:19.733555    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:48:19.733639    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:19.744202    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:48:19.744284    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:19.756578    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:48:19.756655    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:19.767008    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:48:19.767078    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:19.776945    4810 logs.go:282] 0 containers: []
	W1025 18:48:19.776957    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:19.777034    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:19.787584    4810 logs.go:282] 1 containers: [d67f7969a5df]
	I1025 18:48:19.787602    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:48:19.787608    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:48:19.798869    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:48:19.798879    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:48:19.821917    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:48:19.821928    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:48:19.844441    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:48:19.844455    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:48:19.856140    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:48:19.856152    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:48:19.874140    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:48:19.874150    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:48:19.889161    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:48:19.889177    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:48:19.904857    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:48:19.904870    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:48:19.922549    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:19.922559    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:19.949509    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:48:19.949517    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:19.961344    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:48:19.961354    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:48:19.974940    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:19.974952    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:19.979399    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:19.979409    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:20.020890    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:48:20.020901    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:48:20.035494    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:48:20.035504    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:48:20.053045    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:20.053059    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:16.793963    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:16.793985    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:22.585026    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:21.795292    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:21.795332    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:27.587802    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:27.588101    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:27.612844    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:48:27.612954    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:27.629022    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:48:27.629115    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:27.642155    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:48:27.642235    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:27.653490    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:48:27.653567    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:27.663828    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:48:27.663909    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:27.675978    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:48:27.676060    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:27.686077    4810 logs.go:282] 0 containers: []
	W1025 18:48:27.686093    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:27.686163    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:27.696304    4810 logs.go:282] 1 containers: [d67f7969a5df]
	I1025 18:48:27.696322    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:48:27.696328    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:48:27.707337    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:48:27.707349    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:48:27.718766    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:48:27.718778    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:48:27.733998    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:27.734011    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:27.761049    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:48:27.761057    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:48:27.775007    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:48:27.775023    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:48:27.792014    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:48:27.792024    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:48:27.806081    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:48:27.806091    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:48:27.830202    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:48:27.830212    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:48:27.847273    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:48:27.847284    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:27.860331    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:27.860346    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:27.896357    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:48:27.896368    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:48:27.910185    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:48:27.910198    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:48:27.924278    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:48:27.924291    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:48:27.936083    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:27.936096    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:27.964015    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:27.964023    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
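Each gathering cycle above follows the same two-step pattern: enumerate the per-component containers with a name filter, then tail the last 400 lines of each match. A condensed sketch of one pass, with the component list and commands taken verbatim from the filters in this log (run inside the guest):

	# One diagnostic pass, as in the log: list k8s_<component>
	# containers, then tail each container's recent output.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  for id in $(docker ps -a --filter=name=k8s_$c --format={{.ID}}); do
	    docker logs --tail 400 "$id"
	  done
	done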
	I1025 18:48:26.797017    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:26.797051    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:30.469556    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:31.799152    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:31.799197    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:35.472090    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:35.472494    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:35.499817    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:48:35.499961    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:35.519859    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:48:35.519948    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:35.533501    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:48:35.533591    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:35.544808    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:48:35.544886    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:35.555348    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:48:35.555422    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:35.565854    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:48:35.565922    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:35.575944    4810 logs.go:282] 0 containers: []
	W1025 18:48:35.575956    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:35.576019    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:35.586414    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:48:35.586435    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:48:35.586443    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:48:35.600103    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:48:35.600115    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:48:35.611183    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:48:35.611197    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:48:35.622645    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:48:35.622657    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:35.635060    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:48:35.635074    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:48:35.648635    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:48:35.648648    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:48:35.660004    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:48:35.660015    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:48:35.682903    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:35.682914    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:35.708897    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:48:35.708903    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:48:35.730065    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:48:35.730076    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:48:35.749579    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:48:35.749589    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:48:35.760854    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:48:35.760866    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:48:35.778428    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:48:35.778443    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:48:35.797871    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:35.797885    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:35.828260    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:35.828268    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:35.862326    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:48:35.862341    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:48:35.875369    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:35.875379    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:38.381717    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:36.801610    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:36.801707    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:36.813665    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:48:36.813742    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:36.823991    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:48:36.824074    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:36.834239    4599 logs.go:282] 2 containers: [60d180d33f33 6b3f7166e29d]
	I1025 18:48:36.834320    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:36.844640    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:48:36.844716    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:36.855581    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:48:36.855669    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:36.866769    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:48:36.866839    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:36.877175    4599 logs.go:282] 0 containers: []
	W1025 18:48:36.877187    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:36.877261    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:36.887819    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:48:36.887834    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:48:36.887840    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:48:36.902623    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:48:36.902634    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:48:36.914622    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:36.914638    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:36.939623    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:48:36.939632    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:36.952153    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:36.952163    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:36.987113    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:48:36.987122    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:48:37.002288    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:48:37.002304    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:48:37.014051    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:48:37.014064    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:48:37.026050    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:48:37.026062    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:48:37.043350    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:48:37.043359    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:48:37.054975    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:37.054989    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:37.059433    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:37.059438    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:37.094344    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:48:37.094359    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:48:39.613229    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:43.384213    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:43.384453    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:43.407822    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:48:43.407955    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:43.424556    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:48:43.424659    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:43.438052    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:48:43.438136    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:43.449383    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:48:43.449467    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:43.459948    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:48:43.460019    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:43.474747    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:48:43.474814    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:43.485313    4810 logs.go:282] 0 containers: []
	W1025 18:48:43.485326    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:43.485381    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:43.496085    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:48:43.496103    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:48:43.496108    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:48:43.519400    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:48:43.519410    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:48:43.534231    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:48:43.534242    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:48:43.552093    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:43.552105    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:43.581036    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:48:43.581047    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:48:43.597850    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:48:43.597860    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:48:43.615214    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:43.615224    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:43.639774    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:48:43.639781    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:43.655795    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:43.655809    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:43.691963    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:48:43.691974    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:48:43.708883    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:48:43.708894    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:48:43.721393    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:48:43.721405    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:48:43.733063    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:48:43.733075    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:48:43.745822    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:48:43.745835    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:48:43.757540    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:43.757551    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:43.763573    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:48:43.763581    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:48:43.777437    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:48:43.777452    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:48:44.615802    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:44.616083    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:44.643335    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:48:44.643474    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:44.659653    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:48:44.659740    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:44.672902    4599 logs.go:282] 2 containers: [60d180d33f33 6b3f7166e29d]
	I1025 18:48:44.672984    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:44.684032    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:48:44.684111    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:44.694889    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:48:44.694977    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:44.705362    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:48:44.705430    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:44.715330    4599 logs.go:282] 0 containers: []
	W1025 18:48:44.715344    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:44.715403    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:44.728351    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:48:44.728365    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:48:44.728371    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:48:44.740417    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:48:44.740428    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:48:44.755681    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:48:44.755693    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:48:44.773428    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:48:44.773439    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:44.784583    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:44.784594    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:44.789494    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:44.789504    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:44.825280    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:48:44.825292    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:48:44.839794    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:48:44.839805    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:48:44.851645    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:48:44.851657    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:48:44.863265    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:44.863275    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:44.888953    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:44.888967    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:44.925428    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:48:44.925442    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:48:44.939761    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:48:44.939771    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:48:46.293885    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:47.453614    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:51.296655    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:51.296841    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:51.311707    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:48:51.311805    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:51.323196    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:48:51.323271    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:51.333907    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:48:51.333990    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:51.345112    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:48:51.345190    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:51.356045    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:48:51.356126    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:51.366635    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:48:51.366712    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:51.380020    4810 logs.go:282] 0 containers: []
	W1025 18:48:51.380037    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:51.380112    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:51.390491    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:48:51.390517    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:51.390522    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:51.394641    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:48:51.394648    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:48:51.408524    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:48:51.408534    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:48:51.426340    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:48:51.426351    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:48:51.441283    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:48:51.441297    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:48:51.452749    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:51.452760    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:51.477343    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:51.477356    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:51.506004    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:48:51.506012    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:48:51.519001    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:48:51.519014    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:48:51.541822    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:48:51.541833    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:48:51.553223    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:48:51.553233    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:48:51.564424    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:51.564435    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:51.603078    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:48:51.603094    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:51.615400    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:48:51.615414    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:48:51.629440    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:48:51.629452    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:48:51.646772    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:48:51.646783    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:48:51.664233    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:48:51.664243    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:48:54.180651    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:52.456605    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:52.457100    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:52.501427    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:48:52.501583    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:52.521087    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:48:52.521200    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:52.534458    4599 logs.go:282] 2 containers: [60d180d33f33 6b3f7166e29d]
	I1025 18:48:52.534536    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:52.546847    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:48:52.546925    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:52.557851    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:48:52.557926    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:52.568578    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:48:52.568663    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:52.579602    4599 logs.go:282] 0 containers: []
	W1025 18:48:52.579613    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:52.579688    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:52.590421    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:48:52.590436    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:52.590442    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:52.615554    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:52.615564    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:52.650955    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:48:52.650966    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:48:52.665518    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:48:52.665530    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:48:52.679675    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:48:52.679685    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:48:52.691418    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:48:52.691429    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:48:52.702585    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:48:52.702595    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:48:52.720298    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:48:52.720309    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:52.732689    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:52.732699    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:52.737303    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:52.737310    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:52.771892    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:48:52.771903    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:48:52.784229    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:48:52.784240    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:48:52.799570    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:48:52.799582    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:48:59.183172    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:59.183425    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:59.205547    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:48:59.205683    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:59.221593    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:48:59.221680    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:59.233666    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:48:59.233749    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:59.244791    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:48:59.244880    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:59.255290    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:48:59.255371    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:59.266035    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:48:59.266110    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:59.275887    4810 logs.go:282] 0 containers: []
	W1025 18:48:59.275900    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:59.275962    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:59.286566    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:48:59.286591    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:48:59.286597    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:48:59.305486    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:48:59.305497    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:48:59.319423    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:48:59.319434    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:48:59.330537    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:59.330550    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:59.359978    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:48:59.359989    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:48:59.376907    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:48:59.376917    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:48:59.388755    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:48:59.388766    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:48:59.406625    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:48:59.406638    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:59.425097    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:48:59.425109    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:48:59.439135    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:48:59.439149    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:48:59.459452    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:48:59.459465    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:48:59.487290    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:48:59.487305    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:48:59.505354    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:48:59.505364    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:48:59.517663    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:59.517675    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:59.541887    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:48:59.541895    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:48:59.554613    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:59.554622    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:59.590274    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:59.590286    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
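Besides per-container logs, every cycle also pulls the host-level sources seen above: the kubelet and Docker journals, dmesg, a node description, and a container-status listing that prefers crictl but falls back to docker. Collected in one place (commands verbatim from the log) for reproducing the diagnosis by hand inside the guest:

	# Host-level sources gathered on each pass.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	# Prefer crictl when installed, otherwise fall back to docker:
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a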
	I1025 18:48:55.315158    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:02.096506    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:00.316223    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:00.316409    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:00.335197    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:49:00.335306    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:00.349188    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:49:00.349268    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:00.360920    4599 logs.go:282] 2 containers: [60d180d33f33 6b3f7166e29d]
	I1025 18:49:00.360995    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:00.371855    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:49:00.371933    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:00.388033    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:49:00.388107    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:00.398733    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:49:00.398809    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:00.408944    4599 logs.go:282] 0 containers: []
	W1025 18:49:00.408961    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:00.409027    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:00.419362    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:49:00.419376    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:00.419380    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:00.454061    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:00.454072    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:00.458301    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:49:00.458310    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:49:00.470039    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:49:00.470051    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:49:00.483770    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:49:00.483781    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:00.495748    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:00.495758    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:00.520431    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:00.520441    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:00.556194    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:49:00.556205    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:49:00.570429    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:49:00.570440    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:49:00.584164    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:49:00.584175    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:49:00.595682    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:49:00.595693    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:49:00.611008    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:49:00.611019    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:49:00.623161    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:49:00.623172    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:49:03.142698    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:07.098905    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:07.099115    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:07.120133    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:49:07.120237    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:07.137531    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:49:07.137638    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:07.150931    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:49:07.151016    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:07.164509    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:49:07.164595    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:07.174782    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:49:07.174849    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:07.185740    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:49:07.185821    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:07.196596    4810 logs.go:282] 0 containers: []
	W1025 18:49:07.196607    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:07.196670    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:07.214442    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:49:07.214467    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:49:07.214473    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:49:07.249514    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:49:07.249532    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:49:07.265689    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:49:07.265701    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:07.278934    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:07.278945    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:07.313751    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:49:07.313764    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:49:07.329203    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:49:07.329214    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:49:07.341751    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:49:07.341764    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:49:07.353658    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:49:07.353672    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:49:07.371299    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:07.371310    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:07.400091    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:49:07.400100    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:49:07.414077    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:49:07.414087    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:49:07.429350    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:49:07.429363    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:49:07.441362    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:07.441374    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:07.465957    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:07.465964    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:07.470829    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:49:07.470838    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:49:07.484575    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:49:07.484585    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:49:07.496834    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:49:07.496843    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:49:10.022826    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:08.145121    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:08.145394    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:08.161956    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:49:08.162058    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:08.174926    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:49:08.175003    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:08.188442    4599 logs.go:282] 2 containers: [60d180d33f33 6b3f7166e29d]
	I1025 18:49:08.188511    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:08.198887    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:49:08.198967    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:08.214546    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:49:08.214632    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:08.225023    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:49:08.225090    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:08.234815    4599 logs.go:282] 0 containers: []
	W1025 18:49:08.234825    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:08.234895    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:08.248747    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:49:08.248762    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:49:08.248768    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:49:08.264061    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:49:08.264075    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:49:08.278964    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:08.278977    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:08.302441    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:08.302449    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:08.336248    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:08.336257    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:08.372344    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:49:08.372355    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:49:08.387099    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:49:08.387110    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:49:08.408976    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:49:08.408988    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:49:08.421222    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:49:08.421233    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:08.434314    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:08.434327    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:08.438722    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:49:08.438730    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:49:08.450670    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:49:08.450681    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:49:08.462451    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:49:08.462462    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:49:15.025259    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:15.025431    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:15.036398    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:49:15.036485    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:15.046660    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:49:15.046743    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:15.057296    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:49:15.057370    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:15.067788    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:49:15.067865    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:15.078297    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:49:15.078375    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:15.089493    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:49:15.089591    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:15.104234    4810 logs.go:282] 0 containers: []
	W1025 18:49:15.104246    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:15.104313    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:15.115502    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:49:15.115545    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:49:15.115553    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:49:15.127558    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:49:15.127569    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:49:15.138948    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:49:15.138958    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:49:15.153673    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:49:15.153688    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:49:15.170664    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:49:15.170673    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:49:15.182273    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:15.182283    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:15.212765    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:15.212775    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:15.251309    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:49:15.251320    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:49:15.265360    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:49:15.265371    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:49:10.986278    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:15.280795    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:49:15.280807    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:49:15.308341    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:15.308355    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:15.312926    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:49:15.312933    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:49:15.326855    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:49:15.326870    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:49:15.345334    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:15.345349    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:15.372023    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:49:15.372030    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:15.383488    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:49:15.383502    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:49:15.395859    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:49:15.395871    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:49:17.908695    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:15.988614    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:15.988834    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:16.013132    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:49:16.013258    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:16.027797    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:49:16.027884    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:16.039757    4599 logs.go:282] 2 containers: [60d180d33f33 6b3f7166e29d]
	I1025 18:49:16.039837    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:16.050380    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:49:16.050458    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:16.060589    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:49:16.060664    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:16.071084    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:49:16.071165    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:16.081258    4599 logs.go:282] 0 containers: []
	W1025 18:49:16.081271    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:16.081338    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:16.091739    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:49:16.091752    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:49:16.091757    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:49:16.103009    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:16.103022    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:16.128527    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:49:16.128538    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:49:16.145503    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:49:16.145514    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:49:16.160522    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:49:16.160536    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:49:16.173041    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:49:16.173052    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:49:16.198761    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:49:16.198772    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:49:16.212698    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:49:16.212710    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:49:16.224343    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:49:16.224354    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:16.236108    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:16.236121    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:16.271427    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:16.271438    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:16.275854    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:16.275863    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:16.309985    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:49:16.309995    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:49:18.826995    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:22.911441    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:22.911607    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:22.923861    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:49:22.923947    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:22.934763    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:49:22.934834    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:22.945313    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:49:22.945391    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:22.956040    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:49:22.956122    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:22.966538    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:49:22.966643    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:22.976950    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:49:22.977027    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:22.987041    4810 logs.go:282] 0 containers: []
	W1025 18:49:22.987053    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:22.987117    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:23.008462    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:49:23.008481    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:23.008486    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:23.041865    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:49:23.041876    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:49:23.056276    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:49:23.056286    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:49:23.070305    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:49:23.070316    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:49:23.088266    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:49:23.088276    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:49:23.099418    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:49:23.099431    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:49:23.116878    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:49:23.116889    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:49:23.128949    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:49:23.128961    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:49:23.147785    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:49:23.147799    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:49:23.162405    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:23.162418    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:23.167263    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:49:23.167269    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:49:23.193955    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:49:23.193966    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:49:23.215299    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:23.215313    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:23.240576    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:23.240584    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:23.275581    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:49:23.275592    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:49:23.287937    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:49:23.287952    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:49:23.305536    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:49:23.305545    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:23.827708    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:23.827894    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:23.840811    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:49:23.840904    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:23.855761    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:49:23.855840    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:23.866650    4599 logs.go:282] 2 containers: [60d180d33f33 6b3f7166e29d]
	I1025 18:49:23.866736    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:23.878383    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:49:23.878454    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:23.888828    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:49:23.888911    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:23.899305    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:49:23.899375    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:23.912522    4599 logs.go:282] 0 containers: []
	W1025 18:49:23.912536    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:23.912596    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:23.926884    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:49:23.926901    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:23.926907    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:23.963460    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:49:23.963470    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:49:23.979027    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:49:23.979038    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:49:23.991732    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:49:23.991745    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:49:24.009002    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:49:24.009013    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:24.021687    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:24.021699    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:24.055584    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:24.055594    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:24.059793    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:49:24.059801    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:49:24.078097    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:49:24.078107    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:49:24.092023    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:49:24.092033    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:49:24.103338    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:49:24.103347    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:49:24.116353    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:49:24.116363    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:49:24.131686    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:24.131698    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:25.819923    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:26.656795    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:30.821588    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:30.821749    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:30.836966    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:49:30.837057    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:30.849485    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:49:30.849566    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:30.865135    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:49:30.865208    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:30.875590    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:49:30.875669    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:30.885785    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:49:30.885862    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:30.896001    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:49:30.896081    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:30.910551    4810 logs.go:282] 0 containers: []
	W1025 18:49:30.910565    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:30.910624    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:30.921684    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:49:30.921699    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:30.921704    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:30.945704    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:30.945712    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:30.984689    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:49:30.984705    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:49:31.000414    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:49:31.000425    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:49:31.013473    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:49:31.013484    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:31.026312    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:49:31.026327    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:49:31.039348    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:49:31.039364    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:49:31.057298    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:49:31.057309    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:49:31.072821    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:49:31.072833    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:49:31.084237    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:49:31.084250    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:49:31.101515    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:49:31.101524    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:49:31.123762    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:49:31.123775    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:49:31.135302    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:31.135313    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:31.164777    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:31.164786    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:31.169001    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:49:31.169009    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:49:31.182958    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:49:31.182970    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:49:31.197012    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:49:31.197023    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:49:33.721539    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:31.659310    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:31.659484    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:31.670348    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:49:31.670431    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:31.681126    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:49:31.681203    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:31.692826    4599 logs.go:282] 2 containers: [60d180d33f33 6b3f7166e29d]
	I1025 18:49:31.692917    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:31.703357    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:49:31.703435    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:31.713724    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:49:31.713809    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:31.725031    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:49:31.725108    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:31.738431    4599 logs.go:282] 0 containers: []
	W1025 18:49:31.738442    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:31.738509    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:31.755380    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:49:31.755399    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:49:31.755407    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:49:31.770095    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:49:31.770108    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:49:31.781329    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:49:31.781340    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:49:31.793449    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:49:31.793458    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:49:31.808665    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:49:31.808673    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:49:31.826710    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:31.826719    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:31.851510    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:49:31.851517    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:31.863133    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:31.863145    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:31.896237    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:31.896243    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:31.900640    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:31.900646    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:31.935385    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:49:31.935402    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:49:31.949415    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:49:31.949425    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:49:31.961194    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:49:31.961210    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:49:34.475039    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:38.724008    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:38.724194    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:38.737315    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:49:38.737393    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:38.747803    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:49:38.747880    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:38.758425    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:49:38.758508    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:38.769016    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:49:38.769111    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:38.779266    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:49:38.779344    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:38.790172    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:49:38.790250    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:38.800847    4810 logs.go:282] 0 containers: []
	W1025 18:49:38.800860    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:38.800937    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:38.811532    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:49:38.811549    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:38.811554    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:38.835173    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:38.835183    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:38.870411    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:49:38.870422    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:49:38.884837    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:49:38.884849    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:49:38.897282    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:49:38.897297    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:49:38.912360    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:49:38.912371    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:49:38.928634    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:49:38.928647    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:49:38.946604    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:49:38.946614    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:49:38.958197    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:49:38.958208    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:49:38.971426    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:49:38.971438    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:49:38.990145    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:49:38.990156    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:39.002583    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:39.002596    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:39.033294    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:49:39.033304    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:49:39.056439    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:49:39.056449    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:49:39.067621    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:49:39.067632    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:49:39.079285    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:39.079296    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:39.083494    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:49:39.083500    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:49:39.477819    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:39.478045    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:39.500874    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:49:39.501004    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:39.517175    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:49:39.517267    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:39.530114    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:49:39.530192    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:39.541558    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:49:39.541638    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:39.552014    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:49:39.552097    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:39.562740    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:49:39.562822    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:39.573120    4599 logs.go:282] 0 containers: []
	W1025 18:49:39.573138    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:39.573200    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:39.584018    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:49:39.584038    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:49:39.584044    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:49:39.598354    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:49:39.598364    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:49:39.620253    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:39.620266    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:39.646308    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:39.646315    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:39.650893    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:39.650903    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:39.686056    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:49:39.686067    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:49:39.700122    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:49:39.700135    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:49:39.711346    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:49:39.711358    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:49:39.728875    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:49:39.728886    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:49:39.745740    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:39.745750    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:39.779508    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:49:39.779516    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:49:39.795218    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:49:39.795229    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:49:39.806767    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:49:39.806779    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:49:39.818148    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:49:39.818161    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:49:39.833643    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:49:39.833657    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:41.599674    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:42.348579    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:46.602095    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:46.602282    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:46.621698    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:49:46.621791    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:46.635444    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:49:46.635521    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:46.647124    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:49:46.647208    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:46.658520    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:49:46.658602    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:46.669957    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:49:46.670039    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:46.681093    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:49:46.681160    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:46.691510    4810 logs.go:282] 0 containers: []
	W1025 18:49:46.691521    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:46.691583    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:46.702568    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:49:46.702584    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:49:46.702589    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:49:46.721029    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:49:46.721042    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:49:46.738294    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:49:46.738305    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:49:46.752579    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:49:46.752592    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:49:46.767819    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:46.767833    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:46.791285    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:49:46.791293    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:49:46.806687    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:46.806696    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:46.837549    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:46.837557    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:46.875267    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:49:46.875279    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:49:46.888953    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:49:46.888963    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:49:46.913639    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:49:46.913648    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:49:46.931018    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:49:46.931033    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:46.943594    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:46.943605    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:46.948112    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:49:46.948119    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:49:46.964065    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:49:46.964075    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:49:46.975409    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:49:46.975421    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:49:47.001863    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:49:47.001874    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:49:49.513484    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:47.351307    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:47.351487    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:47.367992    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:49:47.368079    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:47.380601    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:49:47.380680    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:47.398193    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:49:47.398271    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:47.408913    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:49:47.408983    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:47.419165    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:49:47.419244    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:47.434009    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:49:47.434082    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:47.444477    4599 logs.go:282] 0 containers: []
	W1025 18:49:47.444488    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:47.444550    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:47.455152    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:49:47.455172    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:49:47.455178    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:49:47.470051    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:49:47.470064    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:47.482205    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:47.482221    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:47.516451    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:49:47.516465    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:49:47.529239    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:49:47.529250    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:49:47.540120    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:47.540134    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:47.564055    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:49:47.564064    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:49:47.578565    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:49:47.578575    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:49:47.596447    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:49:47.596456    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:49:47.610774    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:49:47.610783    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:49:47.624897    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:49:47.624908    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:49:47.636403    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:49:47.636414    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:49:47.647787    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:49:47.647798    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:49:47.659170    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:47.659181    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:47.663767    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:47.663774    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:50.201479    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:54.515952    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:54.516200    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:54.536306    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:49:54.536451    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:54.550381    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:49:54.550469    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:54.562937    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:49:54.563053    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:54.573832    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:49:54.573905    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:54.584108    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:49:54.584175    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:54.594375    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:49:54.594446    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:54.604416    4810 logs.go:282] 0 containers: []
	W1025 18:49:54.604426    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:54.604486    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:54.615600    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:49:54.615614    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:54.615620    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:54.651449    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:49:54.651459    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:49:54.673439    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:49:54.673449    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:49:54.690853    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:49:54.690868    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:49:54.708329    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:49:54.708339    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:49:54.720180    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:49:54.720193    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:49:54.732003    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:49:54.732016    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:49:54.744792    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:49:54.744808    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:49:54.769499    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:49:54.769517    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:49:54.784526    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:49:54.784537    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:49:54.796822    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:54.796837    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:54.828300    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:54.828309    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:54.832547    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:49:54.832553    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:49:54.846597    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:49:54.846612    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:49:54.868891    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:49:54.868905    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:49:54.880368    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:54.880379    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:54.903301    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:49:54.903309    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
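
	[Editor's note] The cycle that ends above (pid 4810) is the diagnostic fallback repeated throughout this section: after a healthz timeout, each control-plane component's containers are listed with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" and each container is tailed with "docker logs --tail 400 <id>". As a rough illustration only (a hypothetical sketch of the pattern visible in the log, not minikube's actual logs.go code), the fallback could look like this in Go:

	// gather.go: hypothetical sketch of the log-gathering fallback seen
	// in this transcript. Component names mirror the --filter=name=...
	// lines above; everything else is illustrative.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func gather(component string) {
		// e.g. component = "kube-apiserver" -> filter "name=k8s_kube-apiserver"
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			// errors ignored here for brevity; a stopped container still
			// yields its last logs, which is the point of the fallback
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", component, id, logs)
		}
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"} {
			gather(c)
		}
	}
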
	I1025 18:49:55.203966    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:55.204084    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:55.215836    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:49:55.215911    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:55.228098    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:49:55.228179    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:55.238682    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:49:55.238762    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:55.249199    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:49:55.249270    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:55.259550    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:49:55.259622    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:55.269755    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:49:55.269820    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:55.280161    4599 logs.go:282] 0 containers: []
	W1025 18:49:55.280175    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:55.280245    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:55.293174    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:49:55.293191    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:49:55.293197    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:49:57.418534    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:55.307107    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:49:55.307117    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:49:55.324110    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:49:55.324122    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:55.336752    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:55.336766    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:55.341527    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:55.341534    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:55.377469    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:49:55.377483    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:49:55.391502    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:49:55.391512    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:49:55.402921    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:49:55.402933    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:49:55.418988    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:49:55.419000    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:49:55.436822    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:55.436832    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:55.470897    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:49:55.470910    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:49:55.482954    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:49:55.482966    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:49:55.494865    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:49:55.494876    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:49:55.506391    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:49:55.506403    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:49:55.518364    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:55.518374    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:58.045791    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:02.421078    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:02.421272    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:02.442220    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:50:02.442318    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:02.455798    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:50:02.455885    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:02.468027    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:50:02.468099    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:02.479987    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:50:02.480059    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:02.490651    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:50:02.490727    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:02.501218    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:50:02.501290    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:02.511683    4810 logs.go:282] 0 containers: []
	W1025 18:50:02.511697    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:02.511760    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:02.522490    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:50:02.522509    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:02.522514    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:02.552929    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:50:02.552943    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:50:02.568161    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:50:02.568177    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:50:02.581364    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:50:02.581382    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:50:02.604153    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:50:02.604163    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:50:02.619010    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:02.619020    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:02.653174    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:50:02.653189    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:50:02.667153    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:50:02.667163    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:50:02.693433    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:50:02.693444    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:50:02.710825    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:02.710835    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:02.736140    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:50:02.736153    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:02.748000    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:02.748015    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:02.752309    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:50:02.752316    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:50:02.764517    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:50:02.764527    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:50:02.776045    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:50:02.776056    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:50:02.787729    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:50:02.787740    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:50:02.802742    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:50:02.802756    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:50:03.048208    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:03.048346    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:03.064329    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:50:03.064423    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:03.074922    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:50:03.075008    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:03.089399    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:50:03.089470    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:03.099811    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:50:03.099877    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:03.113512    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:50:03.113579    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:03.124136    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:50:03.124213    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:03.134376    4599 logs.go:282] 0 containers: []
	W1025 18:50:03.134389    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:03.134452    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:03.149884    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:50:03.149902    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:03.149908    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:03.155091    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:03.155097    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:03.179984    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:50:03.179993    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:50:03.191991    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:50:03.192001    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:50:03.203888    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:03.203898    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:03.237934    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:50:03.237943    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:50:03.254095    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:50:03.254112    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:50:03.268439    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:50:03.268449    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:50:03.283455    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:50:03.283468    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:50:03.299346    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:50:03.299356    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:50:03.325667    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:50:03.325682    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:50:03.345014    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:50:03.345025    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:03.357754    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:03.357766    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:03.393636    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:50:03.393649    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:50:03.405365    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:50:03.405377    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:50:05.326182    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:05.919588    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:10.328567    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:10.328879    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:10.381632    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:50:10.381728    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:10.399256    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:50:10.399341    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:10.414080    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:50:10.414161    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:10.425175    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:50:10.425261    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:10.435740    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:50:10.435813    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:10.446033    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:50:10.446112    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:10.458287    4810 logs.go:282] 0 containers: []
	W1025 18:50:10.458298    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:10.458363    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:10.468884    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:50:10.468909    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:10.468918    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:10.505157    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:50:10.505168    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:50:10.519497    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:50:10.519511    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:50:10.533233    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:50:10.533243    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:50:10.547680    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:50:10.547694    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:50:10.564685    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:50:10.564697    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:10.580413    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:10.580424    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:10.602953    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:10.602961    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:10.607308    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:50:10.607314    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:50:10.619135    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:50:10.619148    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:50:10.642245    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:50:10.642259    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:50:10.654501    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:50:10.654512    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:50:10.671301    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:50:10.671311    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:50:10.682486    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:50:10.682496    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:50:10.699692    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:50:10.699702    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:50:10.711752    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:10.711766    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:10.740351    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:50:10.740362    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:50:13.255112    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:10.922069    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:10.922212    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:10.936167    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:50:10.936253    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:10.947321    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:50:10.947392    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:10.958278    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:50:10.958368    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:10.969557    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:50:10.969629    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:10.979778    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:50:10.979859    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:10.990697    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:50:10.990777    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:11.000686    4599 logs.go:282] 0 containers: []
	W1025 18:50:11.000700    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:11.000765    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:11.019264    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:50:11.019282    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:50:11.019287    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:50:11.031406    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:50:11.031417    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:50:11.050226    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:50:11.050237    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:50:11.061648    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:11.061659    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:11.088018    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:50:11.088027    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:50:11.102009    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:50:11.102019    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:50:11.113932    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:11.113942    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:11.153842    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:50:11.153853    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:50:11.166917    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:50:11.166928    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:50:11.178884    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:11.178895    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:11.212811    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:11.212817    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:11.216982    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:50:11.216991    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:11.228140    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:50:11.228152    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:50:11.242465    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:50:11.242481    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:50:11.265092    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:50:11.265102    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:50:13.786663    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:18.257676    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:18.258143    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:18.288992    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:50:18.289137    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:18.308438    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:50:18.308551    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:18.322108    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:50:18.322197    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:18.334360    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:50:18.334436    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:18.350320    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:50:18.350401    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:18.361492    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:50:18.361576    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:18.374134    4810 logs.go:282] 0 containers: []
	W1025 18:50:18.374147    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:18.374208    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:18.385078    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:50:18.385096    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:50:18.385101    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:50:18.397743    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:50:18.397756    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:50:18.408875    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:50:18.408886    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:50:18.422510    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:18.422522    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:18.458731    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:50:18.458746    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:50:18.473949    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:50:18.473964    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:50:18.497235    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:50:18.497245    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:50:18.513375    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:50:18.513385    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:50:18.526429    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:18.526441    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:18.531142    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:50:18.531148    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:50:18.546081    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:50:18.546095    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:50:18.563435    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:50:18.563448    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:50:18.575558    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:18.575572    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:18.598536    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:50:18.598543    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:18.610751    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:18.610767    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:18.640304    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:50:18.640312    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:50:18.657736    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:50:18.657749    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:50:18.787896    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:18.788064    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:18.799198    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:50:18.799274    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:18.809321    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:50:18.809400    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:18.824407    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:50:18.824491    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:18.834685    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:50:18.834764    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:18.845733    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:50:18.845811    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:18.856584    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:50:18.856653    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:18.866958    4599 logs.go:282] 0 containers: []
	W1025 18:50:18.866969    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:18.867030    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:18.877301    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:50:18.877324    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:50:18.877329    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:50:18.892177    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:50:18.892191    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:18.904185    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:18.904195    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:18.939648    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:50:18.939658    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:50:18.955557    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:50:18.955567    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:50:18.967547    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:50:18.967561    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:50:18.979077    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:50:18.979088    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:50:18.994311    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:18.994325    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:19.030160    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:19.030173    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:19.035192    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:50:19.035201    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:50:19.046740    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:19.046751    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:19.072209    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:50:19.072217    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:50:19.085996    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:50:19.086005    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:50:19.097671    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:50:19.097683    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:50:19.109827    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:50:19.109840    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:50:21.172456    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:21.629944    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:26.175050    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:26.175536    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:26.211506    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:50:26.211677    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:26.232667    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:50:26.232785    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:26.247522    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:50:26.247606    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:26.259952    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:50:26.260042    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:26.270548    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:50:26.270629    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:26.281083    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:50:26.281159    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:26.290738    4810 logs.go:282] 0 containers: []
	W1025 18:50:26.290753    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:26.290822    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:26.301508    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:50:26.301525    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:50:26.301531    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:50:26.316733    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:50:26.316743    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:50:26.334447    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:50:26.334461    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:50:26.345655    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:50:26.345666    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:50:26.358530    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:50:26.358543    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:50:26.369841    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:50:26.369854    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:50:26.383773    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:50:26.383787    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:50:26.398225    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:50:26.398235    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:50:26.418324    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:26.418335    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:26.442047    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:50:26.442065    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:50:26.453616    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:50:26.453627    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:50:26.477302    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:50:26.477322    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:50:26.489090    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:50:26.489101    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:26.501092    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:26.501107    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:26.531378    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:26.531387    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:26.535803    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:26.535811    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:26.571788    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:50:26.571800    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:50:29.091190    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:26.632420    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:26.632503    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:26.643029    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:50:26.643120    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:26.653767    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:50:26.653845    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:26.664083    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:50:26.664155    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:26.674811    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:50:26.674894    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:26.684951    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:50:26.685027    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:26.695017    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:50:26.695090    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:26.712156    4599 logs.go:282] 0 containers: []
	W1025 18:50:26.712170    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:26.712242    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:26.724765    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:50:26.724786    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:50:26.724791    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:50:26.737795    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:50:26.737809    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:50:26.749637    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:50:26.749647    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:50:26.767547    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:50:26.767560    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:26.779783    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:26.779793    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:26.784716    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:50:26.784723    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:50:26.802264    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:50:26.802275    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:50:26.814265    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:50:26.814277    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:50:26.826862    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:50:26.826873    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:50:26.844193    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:26.844203    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:26.879645    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:26.879655    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:26.904109    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:26.904119    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:26.940518    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:50:26.940529    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:50:26.954216    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:50:26.954227    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:50:26.965911    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:50:26.965922    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:50:29.479459    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:34.093703    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:34.093894    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:34.107114    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:50:34.107189    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:34.121554    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:50:34.121635    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:34.132450    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:50:34.132530    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:34.143424    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:50:34.143506    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:34.153959    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:50:34.154040    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:34.164290    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:50:34.164372    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:34.174961    4810 logs.go:282] 0 containers: []
	W1025 18:50:34.174973    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:34.175040    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:34.185553    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:50:34.185569    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:34.185575    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:34.189637    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:50:34.189645    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:50:34.211667    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:50:34.211677    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:50:34.223438    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:50:34.223449    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:50:34.240297    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:50:34.240307    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:50:34.251431    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:34.251442    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:34.276353    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:34.276361    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:34.310470    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:50:34.310482    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:50:34.325804    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:50:34.325815    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:50:34.348836    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:50:34.348845    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:34.360434    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:34.360447    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:34.388314    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:50:34.388322    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:50:34.405612    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:50:34.405626    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:50:34.420068    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:50:34.420078    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:50:34.437521    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:50:34.437533    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:50:34.449436    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:50:34.449449    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:50:34.463277    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:50:34.463287    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:50:34.481184    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:34.481282    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:34.492395    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:50:34.492478    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:34.502685    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:50:34.502765    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:34.513763    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:50:34.513859    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:34.525530    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:50:34.525608    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:34.536440    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:50:34.536509    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:34.547012    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:50:34.547082    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:34.557398    4599 logs.go:282] 0 containers: []
	W1025 18:50:34.557410    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:34.557468    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:34.568118    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:50:34.568135    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:34.568141    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:34.604178    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:50:34.604188    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:50:34.619080    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:50:34.619092    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:50:34.634669    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:50:34.634679    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:50:34.652596    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:50:34.652606    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:50:34.666554    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:50:34.666568    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:50:34.678346    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:34.678357    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:34.715279    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:50:34.715293    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:50:34.729121    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:50:34.729132    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:50:34.742035    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:50:34.742049    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:50:34.757999    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:50:34.758011    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:34.770645    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:34.770657    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:34.775609    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:50:34.775620    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:50:34.791279    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:50:34.791292    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:50:34.804619    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:34.804630    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
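Both minikube processes in this log (PIDs 4599 and 4810) repeat the same diagnostic cycle: probe the apiserver's /healthz endpoint, and when the probe times out with the "stopped: ... context deadline exceeded" error seen above, fall back to enumerating the component containers and tailing their logs. A minimal Go sketch of such a probe follows; it is a simplification, not minikube's actual api_server.go code, and the 5-second budget and skipped TLS verification are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // assumed per-probe budget
		Transport: &http.Transport{
			// The apiserver's cert is not trusted by the probing host,
			// so a local health probe would typically skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// This is the path that produces the "stopped: ..." lines above.
		return fmt.Errorf("stopped: %s: %w", url, err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}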
	I1025 18:50:36.981426    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:37.330380    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:41.982025    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:41.982357    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:42.008354    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:50:42.008486    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:42.025958    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:50:42.026054    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:42.039766    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:50:42.039845    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:42.051326    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:50:42.051399    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:42.062282    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:50:42.062368    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:42.072686    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:50:42.072758    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:42.083026    4810 logs.go:282] 0 containers: []
	W1025 18:50:42.083043    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:42.083111    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:42.093806    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:50:42.093823    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:50:42.093828    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:42.106066    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:42.106080    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:42.135738    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:42.135749    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:42.140027    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:50:42.140032    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:50:42.156812    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:50:42.156828    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:50:42.174524    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:50:42.174539    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:50:42.185528    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:42.185539    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:42.220142    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:50:42.220159    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:50:42.235272    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:50:42.235286    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:50:42.249325    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:50:42.249334    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:50:42.263816    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:50:42.263826    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:50:42.281503    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:50:42.281517    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:50:42.298702    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:42.298712    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:42.323572    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:50:42.323580    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:50:42.348811    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:50:42.348823    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:50:42.365509    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:50:42.365520    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:50:42.385167    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:50:42.385176    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
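Each "Gathering logs" round above follows the same recipe: list candidate containers with docker ps -a --filter=name=k8s_<component> (cri-dockerd names pod containers k8s_<container>_<pod>_..., which is what the filter matches), tail each match with docker logs --tail 400, and read kubelet and docker logs from journald. The Go sketch below is a rough, hedged rendering of that loop, not the actual logs.go implementation; the component list is shortened.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors the enumeration step seen at logs.go:282.
func containerIDs(component string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		for _, id := range containerIDs(c) {
			fmt.Printf("Gathering logs for %s [%s] ...\n", c, id)
			out, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("%s", out)
		}
	}
	// kubelet (and the docker/cri-docker units) come from journald instead:
	out, _ := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").CombinedOutput()
	fmt.Printf("%s", out)
}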
	I1025 18:50:44.899082    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:42.332984    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:42.333086    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:42.348280    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:50:42.348359    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:42.360176    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:50:42.360252    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:42.371794    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:50:42.371900    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:42.382871    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:50:42.382948    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:42.394958    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:50:42.395037    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:42.406286    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:50:42.406362    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:42.421032    4599 logs.go:282] 0 containers: []
	W1025 18:50:42.421044    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:42.421112    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:42.431939    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:50:42.431959    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:50:42.431965    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:50:42.443613    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:50:42.443622    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:50:42.455305    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:50:42.455317    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:42.467078    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:50:42.467091    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:50:42.484686    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:50:42.484697    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:50:42.496224    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:50:42.496235    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:50:42.511204    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:50:42.511219    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:50:42.529509    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:42.529522    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:42.554133    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:42.554148    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:42.589939    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:42.589948    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:42.594852    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:50:42.594861    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:50:42.606312    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:50:42.606327    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:50:42.618663    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:50:42.618674    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:50:42.630518    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:42.630529    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:42.666500    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:50:42.666510    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:50:45.182656    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:49.901601    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:49.901771    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:49.925784    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:50:49.925873    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:49.938651    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:50:49.938729    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:49.953391    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:50:49.953467    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:49.963722    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:50:49.963800    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:49.973827    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:50:49.973892    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:49.984192    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:50:49.984268    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:49.994411    4810 logs.go:282] 0 containers: []
	W1025 18:50:49.994423    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:49.994484    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:50.004950    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:50:50.004967    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:50:50.004973    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:50:50.028229    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:50:50.028246    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:50:50.045989    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:50:50.045999    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:50:50.066418    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:50.066428    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:50.071032    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:50.071039    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:50.106083    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:50:50.106097    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:50:50.121561    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:50:50.121575    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:50:50.135031    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:50:50.135048    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:50:50.147389    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:50:50.147402    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:50:50.159124    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:50.159136    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:50.189906    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:50:50.189921    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:50:50.208184    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:50:50.208193    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:50:50.220885    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:50.220894    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:50.245755    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:50:50.245767    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:50:50.259838    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:50:50.259851    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:50:50.275518    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:50:50.275531    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:50:50.185022    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:50.185119    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:50.196630    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:50:50.196715    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:50.207912    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:50:50.207986    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:50.219199    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:50:50.219285    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:50.241717    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:50:50.241801    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:50.252922    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:50:50.253000    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:50.264661    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:50:50.264748    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:50.281283    4599 logs.go:282] 0 containers: []
	W1025 18:50:50.281296    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:50.281375    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:50.293397    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:50:50.293415    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:50.293421    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:50.288642    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:50:50.288655    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:52.803439    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:50.298823    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:50:50.298913    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:50:50.315092    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:50:50.315105    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:50:50.327624    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:50:50.327635    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:50:50.339090    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:50:50.339106    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:50:50.351591    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:50:50.351607    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:50:50.370320    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:50:50.370334    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:50:50.391427    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:50.391437    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:50.426946    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:50.426954    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:50.462036    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:50:50.462049    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:50:50.480592    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:50.480605    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:50.504793    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:50:50.504803    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:50.516233    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:50:50.516248    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:50:50.531159    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:50:50.531169    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:50:50.543066    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:50:50.543081    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:50:53.057567    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:57.806109    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:57.806249    4810 kubeadm.go:597] duration metric: took 4m3.500441584s to restartPrimaryControlPlane
	W1025 18:50:57.806379    4810 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1025 18:50:57.806430    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1025 18:50:58.870417    4810 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.063948584s)
	I1025 18:50:58.870489    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:50:58.875171    4810 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:50:58.878566    4810 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:50:58.881701    4810 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:50:58.881708    4810 kubeadm.go:157] found existing configuration files:
	
	I1025 18:50:58.881743    4810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/admin.conf
	I1025 18:50:58.884462    4810 kubeadm.go:163] "https://control-plane.minikube.internal:62543" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 18:50:58.884492    4810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 18:50:58.887100    4810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/kubelet.conf
	I1025 18:50:58.890135    4810 kubeadm.go:163] "https://control-plane.minikube.internal:62543" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 18:50:58.890162    4810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 18:50:58.892830    4810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/controller-manager.conf
	I1025 18:50:58.895190    4810 kubeadm.go:163] "https://control-plane.minikube.internal:62543" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 18:50:58.895214    4810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 18:50:58.898198    4810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/scheduler.conf
	I1025 18:50:58.901074    4810 kubeadm.go:163] "https://control-plane.minikube.internal:62543" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 18:50:58.901105    4810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
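The cleanup just above implements a simple rule: a leftover kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is removed so the following kubeadm init can regenerate it. A small illustrative Go version of that check (paths and endpoint taken from this log; error handling deliberately simplified):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanIfStale keeps a kubeconfig only if it mentions the expected endpoint,
// matching the grep-then-rm sequence at kubeadm.go:163 above.
func cleanIfStale(path, endpoint string) {
	data, err := os.ReadFile(path)
	if err != nil || !strings.Contains(string(data), endpoint) {
		os.Remove(path) // ignore errors, as `sudo rm -f` does
		fmt.Printf("%q may not be in %s - removed\n", endpoint, path)
	}
}

func main() {
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		cleanIfStale("/etc/kubernetes/"+f, "https://control-plane.minikube.internal:62543")
	}
}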
	I1025 18:50:58.903724    4810 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 18:50:58.919921    4810 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1025 18:50:58.920032    4810 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 18:50:58.972154    4810 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:50:58.972209    4810 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:50:58.972265    4810 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 18:50:59.020674    4810 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:50:59.024872    4810 out.go:235]   - Generating certificates and keys ...
	I1025 18:50:59.024909    4810 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 18:50:59.024941    4810 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 18:50:59.024982    4810 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 18:50:59.025014    4810 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 18:50:59.025049    4810 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 18:50:59.025082    4810 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 18:50:59.025114    4810 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 18:50:59.025153    4810 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 18:50:59.025191    4810 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 18:50:59.025228    4810 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 18:50:59.025245    4810 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 18:50:59.025274    4810 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:50:59.087096    4810 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:50:59.206299    4810 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:50:59.268475    4810 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:50:59.352682    4810 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:50:59.384287    4810 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:50:59.384679    4810 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:50:59.384776    4810 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 18:50:59.466938    4810 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:50:59.471104    4810 out.go:235]   - Booting up control plane ...
	I1025 18:50:59.471147    4810 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:50:59.471191    4810 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:50:59.471230    4810 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:50:59.471276    4810 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:50:59.471430    4810 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:50:58.059608    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:58.059726    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:58.071319    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:50:58.071409    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:58.082712    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:50:58.082828    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:58.094647    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:50:58.094735    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:58.107186    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:50:58.107266    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:58.124101    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:50:58.124185    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:58.139478    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:50:58.139563    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:58.153563    4599 logs.go:282] 0 containers: []
	W1025 18:50:58.153573    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:58.153647    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:58.165007    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:50:58.165024    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:50:58.165029    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:50:58.183221    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:50:58.183236    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:50:58.195446    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:58.195458    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:58.222291    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:50:58.222306    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:50:58.235595    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:50:58.235611    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:50:58.251967    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:50:58.251980    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:50:58.272599    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:58.272616    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:58.277642    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:50:58.277656    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:50:58.293814    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:50:58.293831    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:50:58.311930    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:50:58.311945    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:50:58.324780    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:50:58.324797    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:50:58.338259    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:50:58.338271    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:58.350640    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:58.350652    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:58.388535    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:58.388560    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:58.426631    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:50:58.426648    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:51:03.974804    4810 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503011 seconds
	I1025 18:51:03.974868    4810 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 18:51:03.978629    4810 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 18:51:04.497698    4810 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 18:51:04.498197    4810 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-473000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 18:51:05.002531    4810 kubeadm.go:310] [bootstrap-token] Using token: c28pqe.2cav7zn00sxzo3a6
	I1025 18:51:05.005691    4810 out.go:235]   - Configuring RBAC rules ...
	I1025 18:51:05.005752    4810 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 18:51:05.005793    4810 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 18:51:05.008084    4810 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 18:51:05.009118    4810 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 18:51:05.010155    4810 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 18:51:05.011057    4810 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 18:51:05.014165    4810 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 18:51:05.162972    4810 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1025 18:51:00.941757    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:05.406701    4810 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1025 18:51:05.407157    4810 kubeadm.go:310] 
	I1025 18:51:05.407196    4810 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1025 18:51:05.407201    4810 kubeadm.go:310] 
	I1025 18:51:05.407243    4810 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1025 18:51:05.407246    4810 kubeadm.go:310] 
	I1025 18:51:05.407258    4810 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1025 18:51:05.407283    4810 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 18:51:05.407305    4810 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 18:51:05.407309    4810 kubeadm.go:310] 
	I1025 18:51:05.407346    4810 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1025 18:51:05.407352    4810 kubeadm.go:310] 
	I1025 18:51:05.407377    4810 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 18:51:05.407381    4810 kubeadm.go:310] 
	I1025 18:51:05.407413    4810 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1025 18:51:05.407452    4810 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 18:51:05.407509    4810 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 18:51:05.407512    4810 kubeadm.go:310] 
	I1025 18:51:05.407555    4810 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 18:51:05.407613    4810 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1025 18:51:05.407617    4810 kubeadm.go:310] 
	I1025 18:51:05.407665    4810 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token c28pqe.2cav7zn00sxzo3a6 \
	I1025 18:51:05.407713    4810 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9d1b51b46aa29bee5add6dcd2f2839d068831832311340de43d2611a1555cef \
	I1025 18:51:05.407730    4810 kubeadm.go:310] 	--control-plane 
	I1025 18:51:05.407733    4810 kubeadm.go:310] 
	I1025 18:51:05.407773    4810 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1025 18:51:05.407776    4810 kubeadm.go:310] 
	I1025 18:51:05.407819    4810 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token c28pqe.2cav7zn00sxzo3a6 \
	I1025 18:51:05.407901    4810 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9d1b51b46aa29bee5add6dcd2f2839d068831832311340de43d2611a1555cef 
	I1025 18:51:05.407978    4810 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
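The --discovery-token-ca-cert-hash value in the join commands above is, per the kubeadm documentation, the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A short Go sketch of that computation; the ca.crt path matches the certificateDir /var/lib/minikube/certs reported earlier in this run.

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the SubjectPublicKeyInfo, not the whole certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}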
	I1025 18:51:05.408035    4810 cni.go:84] Creating CNI manager for ""
	I1025 18:51:05.408046    4810 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:51:05.415414    4810 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 18:51:05.419496    4810 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 18:51:05.422589    4810 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
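The 496-byte /etc/cni/net.d/1-k8s.conflist written here is minikube's bridge CNI configuration. Its exact contents are not reproduced in this log, so the sketch below only illustrates the general shape of a bridge conflist; the plugin fields and the pod subnet are assumptions, not the file minikube actually staged.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative bridge CNI config of the kind written to /etc/cni/net.d.
	conf := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // assumed pod CIDR
				},
			},
		},
	}
	out, _ := json.MarshalIndent(conf, "", "  ")
	fmt.Println(string(out))
}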
	I1025 18:51:05.427655    4810 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 18:51:05.427705    4810 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:51:05.427732    4810 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-473000 minikube.k8s.io/updated_at=2024_10_25T18_51_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=stopped-upgrade-473000 minikube.k8s.io/primary=true
	I1025 18:51:05.469049    4810 ops.go:34] apiserver oom_adj: -16
	I1025 18:51:05.469045    4810 kubeadm.go:1113] duration metric: took 41.38025ms to wait for elevateKubeSystemPrivileges
	I1025 18:51:05.469065    4810 kubeadm.go:394] duration metric: took 4m11.1758825s to StartCluster
	I1025 18:51:05.469076    4810 settings.go:142] acquiring lock: {Name:mk3ff32802ddfc6c1e0425afbf853ac78c436759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:51:05.469178    4810 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:51:05.469602    4810 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/kubeconfig: {Name:mk88d1ac601cc80b64027f8557b82969027e8e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:51:05.469812    4810 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:51:05.469818    4810 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 18:51:05.469863    4810 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-473000"
	I1025 18:51:05.469871    4810 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-473000"
	W1025 18:51:05.469876    4810 addons.go:243] addon storage-provisioner should already be in state true
	I1025 18:51:05.469887    4810 host.go:66] Checking if "stopped-upgrade-473000" exists ...
	I1025 18:51:05.469902    4810 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-473000"
	I1025 18:51:05.469909    4810 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:51:05.469910    4810 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-473000"
	I1025 18:51:05.474411    4810 out.go:177] * Verifying Kubernetes components...
	I1025 18:51:05.475038    4810 kapi.go:59] client config for stopped-upgrade-473000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/client.key", CAFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104e52680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:51:05.478775    4810 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-473000"
	W1025 18:51:05.478780    4810 addons.go:243] addon default-storageclass should already be in state true
	I1025 18:51:05.478787    4810 host.go:66] Checking if "stopped-upgrade-473000" exists ...
	I1025 18:51:05.479306    4810 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 18:51:05.479312    4810 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 18:51:05.479318    4810 sshutil.go:53] new ssh client: &{IP:localhost Port:62508 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/id_rsa Username:docker}
	I1025 18:51:05.482244    4810 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:51:05.486440    4810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:51:05.490438    4810 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:51:05.490445    4810 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 18:51:05.490451    4810 sshutil.go:53] new ssh client: &{IP:localhost Port:62508 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/id_rsa Username:docker}
	I1025 18:51:05.578124    4810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 18:51:05.586221    4810 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:51:05.586292    4810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:51:05.590246    4810 api_server.go:72] duration metric: took 120.418916ms to wait for apiserver process to appear ...
	I1025 18:51:05.590257    4810 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:51:05.590264    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:05.603346    4810 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:51:05.659440    4810 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 18:51:05.971309    4810 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 18:51:05.971323    4810 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
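After init succeeds, readiness is checked in two stages, as the api_server.go lines above show: first wait for a kube-apiserver process to appear (sudo pgrep -xnf kube-apiserver.*minikube.*, api_server.go:52), then poll /healthz (api_server.go:88), all bounded by the 6m0s node wait from start.go:235. A condensed Go sketch of that wait loop; the retry interval and per-probe timeout are assumptions.

package main

import (
	"crypto/tls"
	"net/http"
	"os/exec"
	"time"
)

// processUp mirrors the pgrep step: exit status 0 means the process exists.
func processUp() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func healthzUp(client *http.Client) bool {
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		return false
	}
	resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second, // assumed per-probe budget
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s node wait
	for time.Now().Before(deadline) {
		if processUp() && healthzUp(client) {
			return // apiserver process present and answering /healthz
		}
		time.Sleep(2 * time.Second) // assumed retry interval
	}
}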
	I1025 18:51:05.942484    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:05.942608    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:51:05.955286    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:51:05.955372    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:51:05.967151    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:51:05.967241    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:51:05.978770    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:51:05.978847    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:51:05.990081    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:51:05.990160    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:51:06.000726    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:51:06.000802    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:51:06.013332    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:51:06.013410    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:51:06.023247    4599 logs.go:282] 0 containers: []
	W1025 18:51:06.023259    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:51:06.023322    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:51:06.033993    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:51:06.034012    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:51:06.034018    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:51:06.045834    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:51:06.045845    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:51:06.058200    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:51:06.058213    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:51:06.072198    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:51:06.072211    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:51:06.090248    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:51:06.090261    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:51:06.102063    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:51:06.102076    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:51:06.137531    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:51:06.137547    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:51:06.151951    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:51:06.151965    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:51:06.163750    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:51:06.163766    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:51:06.176132    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:51:06.176144    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:51:06.188132    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:51:06.188145    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:51:06.212535    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:51:06.212545    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:51:06.217040    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:51:06.217047    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:51:06.234130    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:51:06.234142    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:51:06.249120    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:51:06.249132    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:51:08.786338    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:10.592538    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:10.592586    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:13.788735    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:13.788884    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:51:13.799520    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:51:13.799600    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:51:13.810352    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:51:13.810429    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:51:13.821403    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:51:13.821475    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:51:13.831821    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:51:13.831905    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:51:13.843166    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:51:13.843229    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:51:13.853933    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:51:13.854011    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:51:13.863767    4599 logs.go:282] 0 containers: []
	W1025 18:51:13.863779    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:51:13.863846    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:51:13.874435    4599 logs.go:282] 1 containers: [4377af36f915]
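
The "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" calls above work because cri-dockerd names every kubelet-managed container "k8s_<container>_<pod>_<namespace>_...", so a name filter on the component prefix yields the IDs fed to "docker logs". A small illustrative sketch of that discovery step:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers, running or exited,
// whose docker name starts with "k8s_<component>", matching the filter
// used in the trace above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
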
	I1025 18:51:13.874453    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:51:13.874460    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:51:13.886559    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:51:13.886570    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:51:13.898771    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:51:13.898783    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:51:13.934628    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:51:13.934640    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:51:13.946661    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:51:13.946672    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:51:13.967239    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:51:13.967250    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:51:13.980543    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:51:13.980556    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:51:13.985065    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:51:13.985072    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:51:13.997027    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:51:13.997038    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:51:14.009645    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:51:14.009656    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:51:14.032708    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:51:14.032718    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:51:14.066356    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:51:14.066366    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:51:14.080654    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:51:14.080665    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:51:14.093311    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:51:14.093322    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:51:14.110623    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:51:14.110633    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:51:15.592984    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:15.593005    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:16.627492    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:20.593831    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:20.593871    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:21.629901    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:21.630135    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:51:21.653572    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:51:21.653691    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:51:21.669979    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:51:21.670079    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:51:21.683104    4599 logs.go:282] 4 containers: [dbf479c07baa 73883d1045df 60d180d33f33 6b3f7166e29d]
	I1025 18:51:21.683191    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:51:21.694736    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:51:21.694806    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:51:21.705420    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:51:21.705498    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:51:21.715483    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:51:21.715558    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:51:21.726482    4599 logs.go:282] 0 containers: []
	W1025 18:51:21.726494    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:51:21.726561    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:51:21.736601    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:51:21.736623    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:51:21.736628    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:51:21.750876    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:51:21.750887    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:51:21.763154    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:51:21.763167    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:51:21.775070    4599 logs.go:123] Gathering logs for coredns [6b3f7166e29d] ...
	I1025 18:51:21.775081    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6b3f7166e29d"
	I1025 18:51:21.787648    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:51:21.787659    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:51:21.812534    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:51:21.812543    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:51:21.816932    4599 logs.go:123] Gathering logs for coredns [60d180d33f33] ...
	I1025 18:51:21.816938    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 60d180d33f33"
	I1025 18:51:21.828646    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:51:21.828657    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:51:21.843630    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:51:21.843644    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:51:21.856153    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:51:21.856166    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:51:21.892345    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:51:21.892357    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:51:21.910327    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:51:21.910342    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:51:21.922323    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:51:21.922334    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:51:21.958319    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:51:21.958328    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:51:21.972664    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:51:21.972676    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:51:24.490255    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:25.594605    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:25.594669    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:29.492700    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:29.492851    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:51:29.505171    4599 logs.go:282] 1 containers: [9f50947853ee]
	I1025 18:51:29.505251    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:51:29.515619    4599 logs.go:282] 1 containers: [b4af9e497fc5]
	I1025 18:51:29.515697    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:51:29.526320    4599 logs.go:282] 4 containers: [fece211667ee d3bfc54bf916 dbf479c07baa 73883d1045df]
	I1025 18:51:29.526403    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:51:29.536574    4599 logs.go:282] 1 containers: [133f8fc5cb40]
	I1025 18:51:29.536653    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:51:29.546782    4599 logs.go:282] 1 containers: [47e90c0d1ecf]
	I1025 18:51:29.546849    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:51:29.557391    4599 logs.go:282] 1 containers: [1e58ff0149c0]
	I1025 18:51:29.557468    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:51:29.567614    4599 logs.go:282] 0 containers: []
	W1025 18:51:29.567624    4599 logs.go:284] No container was found matching "kindnet"
	I1025 18:51:29.567684    4599 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:51:29.578660    4599 logs.go:282] 1 containers: [4377af36f915]
	I1025 18:51:29.578677    4599 logs.go:123] Gathering logs for dmesg ...
	I1025 18:51:29.578683    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:51:29.583262    4599 logs.go:123] Gathering logs for etcd [b4af9e497fc5] ...
	I1025 18:51:29.583272    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4af9e497fc5"
	I1025 18:51:29.596421    4599 logs.go:123] Gathering logs for kube-scheduler [133f8fc5cb40] ...
	I1025 18:51:29.596434    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 133f8fc5cb40"
	I1025 18:51:29.612606    4599 logs.go:123] Gathering logs for container status ...
	I1025 18:51:29.612619    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:51:29.624478    4599 logs.go:123] Gathering logs for kube-apiserver [9f50947853ee] ...
	I1025 18:51:29.624489    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9f50947853ee"
	I1025 18:51:29.638796    4599 logs.go:123] Gathering logs for coredns [fece211667ee] ...
	I1025 18:51:29.638806    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fece211667ee"
	I1025 18:51:29.649549    4599 logs.go:123] Gathering logs for coredns [d3bfc54bf916] ...
	I1025 18:51:29.649558    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d3bfc54bf916"
	I1025 18:51:29.664694    4599 logs.go:123] Gathering logs for Docker ...
	I1025 18:51:29.664708    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:51:29.689178    4599 logs.go:123] Gathering logs for kubelet ...
	I1025 18:51:29.689185    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:51:29.724260    4599 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:51:29.724269    4599 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:51:29.761173    4599 logs.go:123] Gathering logs for kube-proxy [47e90c0d1ecf] ...
	I1025 18:51:29.761185    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 47e90c0d1ecf"
	I1025 18:51:29.773338    4599 logs.go:123] Gathering logs for kube-controller-manager [1e58ff0149c0] ...
	I1025 18:51:29.773349    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e58ff0149c0"
	I1025 18:51:29.790749    4599 logs.go:123] Gathering logs for coredns [dbf479c07baa] ...
	I1025 18:51:29.790760    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbf479c07baa"
	I1025 18:51:29.802836    4599 logs.go:123] Gathering logs for coredns [73883d1045df] ...
	I1025 18:51:29.802848    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 73883d1045df"
	I1025 18:51:29.814516    4599 logs.go:123] Gathering logs for storage-provisioner [4377af36f915] ...
	I1025 18:51:29.814528    4599 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4377af36f915"
	I1025 18:51:30.595494    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:30.595517    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:32.328827    4599 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:35.596046    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:35.596118    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1025 18:51:35.974291    4810 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1025 18:51:35.978220    4810 out.go:177] * Enabled addons: storage-provisioner
	I1025 18:51:37.331255    4599 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:37.335956    4599 out.go:201] 
	W1025 18:51:37.339777    4599 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1025 18:51:37.339788    4599 out.go:270] * 
	W1025 18:51:37.340562    4599 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:51:37.350775    4599 out.go:201] 
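
The GUEST_START exit above is the outer wait budget expiring rather than a single request failing: process 4599 polled /healthz for the full "wait 6m0s for node" window without one healthy response. Reusing checkHealthz from the earlier sketch, the shape of that outer loop is roughly as follows; the retry interval and structure are assumptions, not minikube's actual node-wait code:

// waitForHealthy keeps probing until the budget runs out; the error
// string mirrors the exit message above.
func waitForHealthy(url string, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		if err := checkHealthz(url); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second) // assumed retry interval
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}
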
	I1025 18:51:35.988166    4810 addons.go:510] duration metric: took 30.5176355s for enable addons: enabled=[storage-provisioner]
	I1025 18:51:40.597320    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:40.597369    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:45.598933    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:45.599008    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Sat 2024-10-26 01:42:39 UTC, ends at Sat 2024-10-26 01:51:53 UTC. --
	Oct 26 01:51:30 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:30Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 26 01:51:35 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:35Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 26 01:51:37 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:37Z" level=error msg="ContainerStats resp: {0x400094c800 linux}"
	Oct 26 01:51:37 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:37Z" level=error msg="ContainerStats resp: {0x400094d380 linux}"
	Oct 26 01:51:38 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:38Z" level=error msg="ContainerStats resp: {0x4000a2f400 linux}"
	Oct 26 01:51:39 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:39Z" level=error msg="ContainerStats resp: {0x4000a2fd80 linux}"
	Oct 26 01:51:39 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:39Z" level=error msg="ContainerStats resp: {0x4000610280 linux}"
	Oct 26 01:51:39 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:39Z" level=error msg="ContainerStats resp: {0x4000358700 linux}"
	Oct 26 01:51:39 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:39Z" level=error msg="ContainerStats resp: {0x4000610bc0 linux}"
	Oct 26 01:51:39 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:39Z" level=error msg="ContainerStats resp: {0x4000359900 linux}"
	Oct 26 01:51:39 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:39Z" level=error msg="ContainerStats resp: {0x4000610f80 linux}"
	Oct 26 01:51:39 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:39Z" level=error msg="ContainerStats resp: {0x4000359f00 linux}"
	Oct 26 01:51:40 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:40Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 26 01:51:45 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:45Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 26 01:51:50 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:50Z" level=error msg="ContainerStats resp: {0x40008e0b80 linux}"
	Oct 26 01:51:50 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:50Z" level=error msg="ContainerStats resp: {0x400085af80 linux}"
	Oct 26 01:51:50 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:50Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 26 01:51:51 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:51Z" level=error msg="ContainerStats resp: {0x4000610500 linux}"
	Oct 26 01:51:52 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:52Z" level=error msg="ContainerStats resp: {0x4000611640 linux}"
	Oct 26 01:51:52 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:52Z" level=error msg="ContainerStats resp: {0x400041f200 linux}"
	Oct 26 01:51:52 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:52Z" level=error msg="ContainerStats resp: {0x400041f8c0 linux}"
	Oct 26 01:51:52 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:52Z" level=error msg="ContainerStats resp: {0x400041fe40 linux}"
	Oct 26 01:51:52 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:52Z" level=error msg="ContainerStats resp: {0x40000b9640 linux}"
	Oct 26 01:51:52 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:52Z" level=error msg="ContainerStats resp: {0x400083e200 linux}"
	Oct 26 01:51:52 running-upgrade-889000 cri-dockerd[3060]: time="2024-10-26T01:51:52Z" level=error msg="ContainerStats resp: {0x4000610f40 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	fece211667eeb       edaa71f2aee88       25 seconds ago      Running             coredns                   2                   1b4f1c2fd006e
	d3bfc54bf916b       edaa71f2aee88       26 seconds ago      Running             coredns                   2                   8d090324c9415
	dbf479c07baa8       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   1b4f1c2fd006e
	73883d1045dfa       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   8d090324c9415
	4377af36f9151       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   3f37bdc2dfcda
	47e90c0d1ecfb       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   0acb6eed15638
	b4af9e497fc53       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   13a79e00af638
	9f50947853ee9       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   0b7245cc5290d
	133f8fc5cb40f       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   1e77ab759b151
	1e58ff0149c06       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   0a1aafd029506
	
	
	==> coredns [73883d1045df] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5443899079039316651.5765258304832703725. HINFO: read udp 10.244.0.3:49884->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5443899079039316651.5765258304832703725. HINFO: read udp 10.244.0.3:37363->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5443899079039316651.5765258304832703725. HINFO: read udp 10.244.0.3:39281->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5443899079039316651.5765258304832703725. HINFO: read udp 10.244.0.3:32891->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5443899079039316651.5765258304832703725. HINFO: read udp 10.244.0.3:45510->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5443899079039316651.5765258304832703725. HINFO: read udp 10.244.0.3:37020->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5443899079039316651.5765258304832703725. HINFO: read udp 10.244.0.3:50869->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5443899079039316651.5765258304832703725. HINFO: read udp 10.244.0.3:46401->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5443899079039316651.5765258304832703725. HINFO: read udp 10.244.0.3:43695->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5443899079039316651.5765258304832703725. HINFO: read udp 10.244.0.3:58844->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d3bfc54bf916] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8289106690795570046.8155810695733425993. HINFO: read udp 10.244.0.3:45218->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8289106690795570046.8155810695733425993. HINFO: read udp 10.244.0.3:35516->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8289106690795570046.8155810695733425993. HINFO: read udp 10.244.0.3:53350->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8289106690795570046.8155810695733425993. HINFO: read udp 10.244.0.3:49687->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8289106690795570046.8155810695733425993. HINFO: read udp 10.244.0.3:54551->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8289106690795570046.8155810695733425993. HINFO: read udp 10.244.0.3:58146->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8289106690795570046.8155810695733425993. HINFO: read udp 10.244.0.3:45137->10.0.2.3:53: i/o timeout
	
	
	==> coredns [dbf479c07baa] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9168868447392964767.2623441550425477144. HINFO: read udp 10.244.0.2:55477->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9168868447392964767.2623441550425477144. HINFO: read udp 10.244.0.2:36060->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9168868447392964767.2623441550425477144. HINFO: read udp 10.244.0.2:55398->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9168868447392964767.2623441550425477144. HINFO: read udp 10.244.0.2:50571->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9168868447392964767.2623441550425477144. HINFO: read udp 10.244.0.2:47682->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9168868447392964767.2623441550425477144. HINFO: read udp 10.244.0.2:45666->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9168868447392964767.2623441550425477144. HINFO: read udp 10.244.0.2:36049->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9168868447392964767.2623441550425477144. HINFO: read udp 10.244.0.2:49017->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9168868447392964767.2623441550425477144. HINFO: read udp 10.244.0.2:41422->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9168868447392964767.2623441550425477144. HINFO: read udp 10.244.0.2:37114->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fece211667ee] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1300977023883813396.2063248099453457359. HINFO: read udp 10.244.0.2:46318->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1300977023883813396.2063248099453457359. HINFO: read udp 10.244.0.2:39895->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1300977023883813396.2063248099453457359. HINFO: read udp 10.244.0.2:34552->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1300977023883813396.2063248099453457359. HINFO: read udp 10.244.0.2:32949->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1300977023883813396.2063248099453457359. HINFO: read udp 10.244.0.2:38054->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1300977023883813396.2063248099453457359. HINFO: read udp 10.244.0.2:51812->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1300977023883813396.2063248099453457359. HINFO: read udp 10.244.0.2:41650->10.0.2.3:53: i/o timeout
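
All four CoreDNS instances above fail the same way: the startup HINFO self-probe to the upstream resolver 10.0.2.3:53 (QEMU's user-mode-networking DNS) times out, consistent with the guest having no working outbound UDP path. A quick reachability sketch against that resolver; the hostname and two-second timeout are arbitrary choices:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Force all lookups through the same upstream CoreDNS is using.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			return d.DialContext(ctx, "udp", "10.0.2.3:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "example.com")
	if err != nil {
		// An i/o timeout here matches the CoreDNS errors above.
		fmt.Println("upstream unreachable:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}
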
	
	
	==> describe nodes <==
	Name:               running-upgrade-889000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-889000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064
	                    minikube.k8s.io/name=running-upgrade-889000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_25T18_47_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 26 Oct 2024 01:47:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-889000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 26 Oct 2024 01:51:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 26 Oct 2024 01:47:36 +0000   Sat, 26 Oct 2024 01:47:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 26 Oct 2024 01:47:36 +0000   Sat, 26 Oct 2024 01:47:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 26 Oct 2024 01:47:36 +0000   Sat, 26 Oct 2024 01:47:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 26 Oct 2024 01:47:36 +0000   Sat, 26 Oct 2024 01:47:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-889000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d8bcc2994d343ce9f5078171294d334
	  System UUID:                8d8bcc2994d343ce9f5078171294d334
	  Boot ID:                    7c7cd58e-d9c8-4bed-a676-0ad07d3a3dfa
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-2b5xg                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 coredns-6d4b75cb6d-vkvfs                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m3s
	  kube-system                 etcd-running-upgrade-889000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-889000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-889000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-proxy-9mwcl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-889000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  Starting                 4m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-889000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x4 over 4m22s)  kubelet          Node running-upgrade-889000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x3 over 4m22s)  kubelet          Node running-upgrade-889000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-889000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-889000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-889000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-889000 status is now: NodeReady
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-889000 event: Registered Node running-upgrade-889000 in Controller
	
	
	==> dmesg <==
	[  +1.673069] systemd-fstab-generator[882]: Ignoring "noauto" for root device
	[  +0.079331] systemd-fstab-generator[893]: Ignoring "noauto" for root device
	[  +0.079606] systemd-fstab-generator[904]: Ignoring "noauto" for root device
	[  +0.190143] systemd-fstab-generator[1053]: Ignoring "noauto" for root device
	[  +0.079584] systemd-fstab-generator[1064]: Ignoring "noauto" for root device
	[  +1.950751] systemd-fstab-generator[1294]: Ignoring "noauto" for root device
	[  +0.320011] kauditd_printk_skb: 92 callbacks suppressed
	[Oct26 01:43] systemd-fstab-generator[1932]: Ignoring "noauto" for root device
	[  +2.464783] systemd-fstab-generator[2207]: Ignoring "noauto" for root device
	[  +0.173809] systemd-fstab-generator[2247]: Ignoring "noauto" for root device
	[  +0.098785] systemd-fstab-generator[2258]: Ignoring "noauto" for root device
	[  +0.097859] systemd-fstab-generator[2271]: Ignoring "noauto" for root device
	[ +12.743778] kauditd_printk_skb: 8 callbacks suppressed
	[  +0.207458] systemd-fstab-generator[3015]: Ignoring "noauto" for root device
	[  +0.086265] systemd-fstab-generator[3028]: Ignoring "noauto" for root device
	[  +0.080700] systemd-fstab-generator[3039]: Ignoring "noauto" for root device
	[  +0.092066] systemd-fstab-generator[3053]: Ignoring "noauto" for root device
	[  +2.316995] systemd-fstab-generator[3204]: Ignoring "noauto" for root device
	[  +2.619848] systemd-fstab-generator[3553]: Ignoring "noauto" for root device
	[  +1.296823] systemd-fstab-generator[3696]: Ignoring "noauto" for root device
	[ +18.621521] kauditd_printk_skb: 68 callbacks suppressed
	[Oct26 01:47] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.348427] systemd-fstab-generator[11903]: Ignoring "noauto" for root device
	[  +5.641272] systemd-fstab-generator[12504]: Ignoring "noauto" for root device
	[  +0.458958] systemd-fstab-generator[12640]: Ignoring "noauto" for root device
	
	
	==> etcd [b4af9e497fc5] <==
	{"level":"info","ts":"2024-10-26T01:47:31.943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-10-26T01:47:31.944Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-10-26T01:47:31.950Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-26T01:47:31.951Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-26T01:47:31.951Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-26T01:47:31.951Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-26T01:47:31.951Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-26T01:47:32.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-26T01:47:32.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-26T01:47:32.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-10-26T01:47:32.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-10-26T01:47:32.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-26T01:47:32.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-10-26T01:47:32.741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-26T01:47:32.741Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-889000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-26T01:47:32.742Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-26T01:47:32.742Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-26T01:47:32.742Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-26T01:47:32.743Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-26T01:47:32.744Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-10-26T01:47:32.744Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-26T01:47:32.744Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-26T01:47:32.744Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-26T01:47:32.755Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-26T01:47:32.757Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 01:51:53 up 9 min,  0 users,  load average: 0.48, 0.46, 0.26
	Linux running-upgrade-889000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [9f50947853ee] <==
	I1026 01:47:33.898870       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1026 01:47:33.918988       1 controller.go:611] quota admission added evaluator for: namespaces
	I1026 01:47:33.960427       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 01:47:33.960444       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 01:47:33.960454       1 cache.go:39] Caches are synced for autoregister controller
	I1026 01:47:33.960545       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1026 01:47:33.961213       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1026 01:47:34.699373       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1026 01:47:34.866101       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 01:47:34.869106       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 01:47:34.869129       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 01:47:35.010461       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 01:47:35.020198       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 01:47:35.123820       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1026 01:47:35.125609       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1026 01:47:35.125952       1 controller.go:611] quota admission added evaluator for: endpoints
	I1026 01:47:35.127223       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 01:47:36.000960       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1026 01:47:36.421621       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1026 01:47:36.425584       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1026 01:47:36.432129       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1026 01:47:36.473271       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 01:47:49.623277       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1026 01:47:49.823619       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1026 01:47:50.302639       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [1e58ff0149c0] <==
	I1026 01:47:49.022006       1 shared_informer.go:262] Caches are synced for daemon sets
	I1026 01:47:49.022031       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1026 01:47:49.022067       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1026 01:47:49.022147       1 shared_informer.go:262] Caches are synced for cronjob
	I1026 01:47:49.022039       1 shared_informer.go:262] Caches are synced for persistent volume
	I1026 01:47:49.028433       1 shared_informer.go:262] Caches are synced for node
	I1026 01:47:49.028488       1 range_allocator.go:173] Starting range CIDR allocator
	I1026 01:47:49.028507       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1026 01:47:49.028534       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1026 01:47:49.030361       1 range_allocator.go:374] Set node running-upgrade-889000 PodCIDR to [10.244.0.0/24]
	I1026 01:47:49.048177       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1026 01:47:49.072053       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1026 01:47:49.074077       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1026 01:47:49.074106       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1026 01:47:49.074117       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1026 01:47:49.125884       1 shared_informer.go:262] Caches are synced for resource quota
	I1026 01:47:49.125903       1 shared_informer.go:262] Caches are synced for resource quota
	I1026 01:47:49.271253       1 shared_informer.go:262] Caches are synced for attach detach
	I1026 01:47:49.624715       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1026 01:47:49.635469       1 shared_informer.go:262] Caches are synced for garbage collector
	I1026 01:47:49.651512       1 shared_informer.go:262] Caches are synced for garbage collector
	I1026 01:47:49.651548       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1026 01:47:49.826793       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9mwcl"
	I1026 01:47:50.023283       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-vkvfs"
	I1026 01:47:50.025611       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-2b5xg"
	
	
	==> kube-proxy [47e90c0d1ecf] <==
	I1026 01:47:50.289624       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1026 01:47:50.289658       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1026 01:47:50.289676       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1026 01:47:50.300583       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1026 01:47:50.300594       1 server_others.go:206] "Using iptables Proxier"
	I1026 01:47:50.300640       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1026 01:47:50.300814       1 server.go:661] "Version info" version="v1.24.1"
	I1026 01:47:50.300854       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 01:47:50.301203       1 config.go:317] "Starting service config controller"
	I1026 01:47:50.301220       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1026 01:47:50.301258       1 config.go:226] "Starting endpoint slice config controller"
	I1026 01:47:50.301264       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1026 01:47:50.301596       1 config.go:444] "Starting node config controller"
	I1026 01:47:50.301609       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1026 01:47:50.402770       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1026 01:47:50.402799       1 shared_informer.go:262] Caches are synced for service config
	I1026 01:47:50.402863       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [133f8fc5cb40] <==
	W1026 01:47:33.912050       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1026 01:47:33.912054       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1026 01:47:33.912066       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 01:47:33.912071       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1026 01:47:33.912088       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1026 01:47:33.912090       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1026 01:47:33.912112       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 01:47:33.912116       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1026 01:47:33.912131       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1026 01:47:33.912134       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1026 01:47:34.815132       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1026 01:47:34.815168       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1026 01:47:34.832257       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1026 01:47:34.832277       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1026 01:47:34.845102       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1026 01:47:34.845210       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1026 01:47:34.896423       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1026 01:47:34.896528       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1026 01:47:34.931011       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1026 01:47:34.931100       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 01:47:34.935026       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1026 01:47:34.935102       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1026 01:47:34.957483       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1026 01:47:34.957576       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1026 01:47:36.701608       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Sat 2024-10-26 01:42:39 UTC, ends at Sat 2024-10-26 01:51:53 UTC. --
	Oct 26 01:47:38 running-upgrade-889000 kubelet[12510]: E1026 01:47:38.650373   12510 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-889000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-889000"
	Oct 26 01:47:48 running-upgrade-889000 kubelet[12510]: I1026 01:47:48.941922   12510 topology_manager.go:200] "Topology Admit Handler"
	Oct 26 01:47:49 running-upgrade-889000 kubelet[12510]: I1026 01:47:49.059625   12510 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 26 01:47:49 running-upgrade-889000 kubelet[12510]: I1026 01:47:49.059682   12510 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e84f5042-104b-4b15-9fa9-7e809cebf964-tmp\") pod \"storage-provisioner\" (UID: \"e84f5042-104b-4b15-9fa9-7e809cebf964\") " pod="kube-system/storage-provisioner"
	Oct 26 01:47:49 running-upgrade-889000 kubelet[12510]: I1026 01:47:49.059826   12510 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2mj4\" (UniqueName: \"kubernetes.io/projected/e84f5042-104b-4b15-9fa9-7e809cebf964-kube-api-access-h2mj4\") pod \"storage-provisioner\" (UID: \"e84f5042-104b-4b15-9fa9-7e809cebf964\") " pod="kube-system/storage-provisioner"
	Oct 26 01:47:49 running-upgrade-889000 kubelet[12510]: I1026 01:47:49.060059   12510 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 26 01:47:49 running-upgrade-889000 kubelet[12510]: E1026 01:47:49.164456   12510 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 26 01:47:49 running-upgrade-889000 kubelet[12510]: E1026 01:47:49.164479   12510 projected.go:192] Error preparing data for projected volume kube-api-access-h2mj4 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 26 01:47:49 running-upgrade-889000 kubelet[12510]: E1026 01:47:49.164520   12510 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/e84f5042-104b-4b15-9fa9-7e809cebf964-kube-api-access-h2mj4 podName:e84f5042-104b-4b15-9fa9-7e809cebf964 nodeName:}" failed. No retries permitted until 2024-10-26 01:47:49.664506979 +0000 UTC m=+13.257342941 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-h2mj4" (UniqueName: "kubernetes.io/projected/e84f5042-104b-4b15-9fa9-7e809cebf964-kube-api-access-h2mj4") pod "storage-provisioner" (UID: "e84f5042-104b-4b15-9fa9-7e809cebf964") : configmap "kube-root-ca.crt" not found
	Oct 26 01:47:49 running-upgrade-889000 kubelet[12510]: E1026 01:47:49.664634   12510 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 26 01:47:49 running-upgrade-889000 kubelet[12510]: E1026 01:47:49.664659   12510 projected.go:192] Error preparing data for projected volume kube-api-access-h2mj4 for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 26 01:47:49 running-upgrade-889000 kubelet[12510]: E1026 01:47:49.664710   12510 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/e84f5042-104b-4b15-9fa9-7e809cebf964-kube-api-access-h2mj4 podName:e84f5042-104b-4b15-9fa9-7e809cebf964 nodeName:}" failed. No retries permitted until 2024-10-26 01:47:50.664700475 +0000 UTC m=+14.257536438 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-h2mj4" (UniqueName: "kubernetes.io/projected/e84f5042-104b-4b15-9fa9-7e809cebf964-kube-api-access-h2mj4") pod "storage-provisioner" (UID: "e84f5042-104b-4b15-9fa9-7e809cebf964") : configmap "kube-root-ca.crt" not found
	Oct 26 01:47:49 running-upgrade-889000 kubelet[12510]: I1026 01:47:49.829295   12510 topology_manager.go:200] "Topology Admit Handler"
	Oct 26 01:47:49 running-upgrade-889000 kubelet[12510]: I1026 01:47:49.966381   12510 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gmp2\" (UniqueName: \"kubernetes.io/projected/cec46555-f178-47d4-b1c9-e5cf795c8bda-kube-api-access-9gmp2\") pod \"kube-proxy-9mwcl\" (UID: \"cec46555-f178-47d4-b1c9-e5cf795c8bda\") " pod="kube-system/kube-proxy-9mwcl"
	Oct 26 01:47:49 running-upgrade-889000 kubelet[12510]: I1026 01:47:49.966611   12510 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cec46555-f178-47d4-b1c9-e5cf795c8bda-kube-proxy\") pod \"kube-proxy-9mwcl\" (UID: \"cec46555-f178-47d4-b1c9-e5cf795c8bda\") " pod="kube-system/kube-proxy-9mwcl"
	Oct 26 01:47:49 running-upgrade-889000 kubelet[12510]: I1026 01:47:49.966627   12510 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cec46555-f178-47d4-b1c9-e5cf795c8bda-xtables-lock\") pod \"kube-proxy-9mwcl\" (UID: \"cec46555-f178-47d4-b1c9-e5cf795c8bda\") " pod="kube-system/kube-proxy-9mwcl"
	Oct 26 01:47:49 running-upgrade-889000 kubelet[12510]: I1026 01:47:49.966638   12510 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cec46555-f178-47d4-b1c9-e5cf795c8bda-lib-modules\") pod \"kube-proxy-9mwcl\" (UID: \"cec46555-f178-47d4-b1c9-e5cf795c8bda\") " pod="kube-system/kube-proxy-9mwcl"
	Oct 26 01:47:50 running-upgrade-889000 kubelet[12510]: I1026 01:47:50.027371   12510 topology_manager.go:200] "Topology Admit Handler"
	Oct 26 01:47:50 running-upgrade-889000 kubelet[12510]: I1026 01:47:50.028244   12510 topology_manager.go:200] "Topology Admit Handler"
	Oct 26 01:47:50 running-upgrade-889000 kubelet[12510]: I1026 01:47:50.066741   12510 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55dhv\" (UniqueName: \"kubernetes.io/projected/aa54233d-b860-4f25-a5da-d4e9e1621706-kube-api-access-55dhv\") pod \"coredns-6d4b75cb6d-vkvfs\" (UID: \"aa54233d-b860-4f25-a5da-d4e9e1621706\") " pod="kube-system/coredns-6d4b75cb6d-vkvfs"
	Oct 26 01:47:50 running-upgrade-889000 kubelet[12510]: I1026 01:47:50.066837   12510 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05594803-59e1-4fed-91a4-a1157b4b7c4a-config-volume\") pod \"coredns-6d4b75cb6d-2b5xg\" (UID: \"05594803-59e1-4fed-91a4-a1157b4b7c4a\") " pod="kube-system/coredns-6d4b75cb6d-2b5xg"
	Oct 26 01:47:50 running-upgrade-889000 kubelet[12510]: I1026 01:47:50.066869   12510 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa54233d-b860-4f25-a5da-d4e9e1621706-config-volume\") pod \"coredns-6d4b75cb6d-vkvfs\" (UID: \"aa54233d-b860-4f25-a5da-d4e9e1621706\") " pod="kube-system/coredns-6d4b75cb6d-vkvfs"
	Oct 26 01:47:50 running-upgrade-889000 kubelet[12510]: I1026 01:47:50.066893   12510 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpp85\" (UniqueName: \"kubernetes.io/projected/05594803-59e1-4fed-91a4-a1157b4b7c4a-kube-api-access-hpp85\") pod \"coredns-6d4b75cb6d-2b5xg\" (UID: \"05594803-59e1-4fed-91a4-a1157b4b7c4a\") " pod="kube-system/coredns-6d4b75cb6d-2b5xg"
	Oct 26 01:51:27 running-upgrade-889000 kubelet[12510]: I1026 01:51:27.834072   12510 scope.go:110] "RemoveContainer" containerID="60d180d33f3339ef6b77165d20db3b4ca08900f2abb628802510fcd308139706"
	Oct 26 01:51:28 running-upgrade-889000 kubelet[12510]: I1026 01:51:28.855333   12510 scope.go:110] "RemoveContainer" containerID="6b3f7166e29dc25d4378c42d98b1c709977eef20320eacc12dbeec0638413e3c"
	
	
	==> storage-provisioner [4377af36f915] <==
	I1026 01:47:50.961497       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 01:47:50.968590       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 01:47:50.968659       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 01:47:50.972815       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 01:47:50.973125       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-889000_20b6171c-7b4b-43b1-9b4c-e2ff99e05253!
	I1026 01:47:50.973583       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7296b9cd-2c59-4142-bf8d-acbe11a6a25c", APIVersion:"v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-889000_20b6171c-7b4b-43b1-9b4c-e2ff99e05253 became leader
	I1026 01:47:51.073858       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-889000_20b6171c-7b4b-43b1-9b4c-e2ff99e05253!
	

-- /stdout --
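Note: the reflector warnings at the top of the kube-scheduler log above ("... is forbidden: User "system:kube-scheduler" cannot list resource ...") are the usual transient RBAC denials while the control plane bootstraps; they cease once the informer caches sync (the closing "Caches are synced" line). A minimal stand-alone Go sketch of the same authorization question, via client-go's SelfSubjectAccessReview API (an illustration, not part of the test suite; it assumes a reachable cluster and a kubeconfig at the default path):

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config and build a clientset.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Ask the API server whether the current identity may "list pods"
	// cluster-wide -- the same decision behind the "pods is forbidden"
	// reflector errors in the log above.
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Resource: "pods"},
		},
	}
	res, err := client.AuthorizationV1().SelfSubjectAccessReviews().Create(context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", res.Status.Allowed)
}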
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-889000 -n running-upgrade-889000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-889000 -n running-upgrade-889000: exit status 2 (15.702158208s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-889000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-889000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-889000
--- FAIL: TestRunningBinaryUpgrade (599.16s)
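Note: every VM start in this run fails with the same host-side error, Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning the socket_vmnet daemon that the qemu2 driver launches QEMU through was not listening. A stand-alone Go sketch (an illustration, not part of the suite; the socket path is taken verbatim from the logs) that probes the socket the way the driver's socket_vmnet_client invocation effectively does:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client connects QEMU to.
	// "connection refused" here matches the ERROR lines in the test
	// output and indicates the daemon is not running on this path.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}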

TestKubernetesUpgrade (19.11s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-507000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-507000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.811416s)

-- stdout --
	* [kubernetes-upgrade-507000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-507000" primary control-plane node in "kubernetes-upgrade-507000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-507000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:45:10.576126    4714 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:45:10.576285    4714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:45:10.576289    4714 out.go:358] Setting ErrFile to fd 2...
	I1025 18:45:10.576291    4714 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:45:10.576433    4714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:45:10.577737    4714 out.go:352] Setting JSON to false
	I1025 18:45:10.596481    4714 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4481,"bootTime":1729902629,"procs":558,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:45:10.596562    4714 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:45:10.601296    4714 out.go:177] * [kubernetes-upgrade-507000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:45:10.609435    4714 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:45:10.609507    4714 notify.go:220] Checking for updates...
	I1025 18:45:10.615441    4714 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:45:10.618473    4714 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:45:10.621417    4714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:45:10.624403    4714 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:45:10.627429    4714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:45:10.630798    4714 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:45:10.630881    4714 config.go:182] Loaded profile config "running-upgrade-889000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:45:10.630931    4714 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:45:10.635389    4714 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:45:10.642397    4714 start.go:297] selected driver: qemu2
	I1025 18:45:10.642408    4714 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:45:10.642415    4714 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:45:10.644971    4714 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:45:10.647413    4714 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:45:10.650506    4714 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 18:45:10.650530    4714 cni.go:84] Creating CNI manager for ""
	I1025 18:45:10.650556    4714 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 18:45:10.650600    4714 start.go:340] cluster config:
	{Name:kubernetes-upgrade-507000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-507000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:45:10.655315    4714 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:45:10.663432    4714 out.go:177] * Starting "kubernetes-upgrade-507000" primary control-plane node in "kubernetes-upgrade-507000" cluster
	I1025 18:45:10.667234    4714 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 18:45:10.667251    4714 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1025 18:45:10.667261    4714 cache.go:56] Caching tarball of preloaded images
	I1025 18:45:10.667345    4714 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:45:10.667351    4714 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1025 18:45:10.667414    4714 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/kubernetes-upgrade-507000/config.json ...
	I1025 18:45:10.667425    4714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/kubernetes-upgrade-507000/config.json: {Name:mk972101c52e71178a98a72cb79c14f709bd3ce4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:45:10.667738    4714 start.go:360] acquireMachinesLock for kubernetes-upgrade-507000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:45:10.667789    4714 start.go:364] duration metric: took 41.667µs to acquireMachinesLock for "kubernetes-upgrade-507000"
	I1025 18:45:10.667802    4714 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-507000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-507000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:45:10.667825    4714 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:45:10.676439    4714 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 18:45:10.704251    4714 start.go:159] libmachine.API.Create for "kubernetes-upgrade-507000" (driver="qemu2")
	I1025 18:45:10.704287    4714 client.go:168] LocalClient.Create starting
	I1025 18:45:10.704360    4714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:45:10.704400    4714 main.go:141] libmachine: Decoding PEM data...
	I1025 18:45:10.704411    4714 main.go:141] libmachine: Parsing certificate...
	I1025 18:45:10.704451    4714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:45:10.704480    4714 main.go:141] libmachine: Decoding PEM data...
	I1025 18:45:10.704490    4714 main.go:141] libmachine: Parsing certificate...
	I1025 18:45:10.704899    4714 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:45:10.868657    4714 main.go:141] libmachine: Creating SSH key...
	I1025 18:45:10.929063    4714 main.go:141] libmachine: Creating Disk image...
	I1025 18:45:10.929069    4714 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:45:10.929249    4714 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2
	I1025 18:45:10.939719    4714 main.go:141] libmachine: STDOUT: 
	I1025 18:45:10.939744    4714 main.go:141] libmachine: STDERR: 
	I1025 18:45:10.939802    4714 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2 +20000M
	I1025 18:45:10.948739    4714 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:45:10.948758    4714 main.go:141] libmachine: STDERR: 
	I1025 18:45:10.948781    4714 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2
	I1025 18:45:10.948792    4714 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:45:10.948810    4714 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:45:10.948844    4714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:4d:14:54:46:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2
	I1025 18:45:10.950751    4714 main.go:141] libmachine: STDOUT: 
	I1025 18:45:10.950766    4714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:45:10.950786    4714 client.go:171] duration metric: took 246.497416ms to LocalClient.Create
	I1025 18:45:12.952956    4714 start.go:128] duration metric: took 2.28514625s to createHost
	I1025 18:45:12.953034    4714 start.go:83] releasing machines lock for "kubernetes-upgrade-507000", held for 2.285282583s
	W1025 18:45:12.953143    4714 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:45:12.965598    4714 out.go:177] * Deleting "kubernetes-upgrade-507000" in qemu2 ...
	W1025 18:45:12.992320    4714 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:45:12.992347    4714 start.go:729] Will try again in 5 seconds ...
	I1025 18:45:17.992923    4714 start.go:360] acquireMachinesLock for kubernetes-upgrade-507000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:45:17.993344    4714 start.go:364] duration metric: took 335.75µs to acquireMachinesLock for "kubernetes-upgrade-507000"
	I1025 18:45:17.993400    4714 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-507000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-507000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:45:17.993649    4714 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:45:18.002150    4714 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 18:45:18.042230    4714 start.go:159] libmachine.API.Create for "kubernetes-upgrade-507000" (driver="qemu2")
	I1025 18:45:18.042295    4714 client.go:168] LocalClient.Create starting
	I1025 18:45:18.042495    4714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:45:18.042580    4714 main.go:141] libmachine: Decoding PEM data...
	I1025 18:45:18.042597    4714 main.go:141] libmachine: Parsing certificate...
	I1025 18:45:18.042658    4714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:45:18.042714    4714 main.go:141] libmachine: Decoding PEM data...
	I1025 18:45:18.042724    4714 main.go:141] libmachine: Parsing certificate...
	I1025 18:45:18.043318    4714 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:45:18.208093    4714 main.go:141] libmachine: Creating SSH key...
	I1025 18:45:18.296600    4714 main.go:141] libmachine: Creating Disk image...
	I1025 18:45:18.296608    4714 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:45:18.296805    4714 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2
	I1025 18:45:18.306989    4714 main.go:141] libmachine: STDOUT: 
	I1025 18:45:18.307012    4714 main.go:141] libmachine: STDERR: 
	I1025 18:45:18.307076    4714 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2 +20000M
	I1025 18:45:18.315655    4714 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:45:18.315672    4714 main.go:141] libmachine: STDERR: 
	I1025 18:45:18.315685    4714 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2
	I1025 18:45:18.315690    4714 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:45:18.315699    4714 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:45:18.315770    4714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:85:a4:ef:c6:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2
	I1025 18:45:18.317640    4714 main.go:141] libmachine: STDOUT: 
	I1025 18:45:18.317656    4714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:45:18.317669    4714 client.go:171] duration metric: took 275.362333ms to LocalClient.Create
	I1025 18:45:20.318601    4714 start.go:128] duration metric: took 2.324979334s to createHost
	I1025 18:45:20.318621    4714 start.go:83] releasing machines lock for "kubernetes-upgrade-507000", held for 2.3253095s
	W1025 18:45:20.318739    4714 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-507000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-507000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:45:20.327928    4714 out.go:201] 
	W1025 18:45:20.335063    4714 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:45:20.335070    4714 out.go:270] * 
	* 
	W1025 18:45:20.335655    4714 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:45:20.345999    4714 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-507000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-507000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-507000: (3.874570875s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-507000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-507000 status --format={{.Host}}: exit status 7 (65.754917ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-507000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-507000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.192553917s)

-- stdout --
	* [kubernetes-upgrade-507000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-507000" primary control-plane node in "kubernetes-upgrade-507000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-507000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-507000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:45:24.329691    4756 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:45:24.329857    4756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:45:24.329860    4756 out.go:358] Setting ErrFile to fd 2...
	I1025 18:45:24.329862    4756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:45:24.329988    4756 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:45:24.331108    4756 out.go:352] Setting JSON to false
	I1025 18:45:24.348859    4756 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4495,"bootTime":1729902629,"procs":559,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:45:24.348958    4756 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:45:24.354100    4756 out.go:177] * [kubernetes-upgrade-507000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:45:24.361882    4756 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:45:24.361945    4756 notify.go:220] Checking for updates...
	I1025 18:45:24.370010    4756 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:45:24.372999    4756 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:45:24.376024    4756 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:45:24.379028    4756 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:45:24.380324    4756 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:45:24.383346    4756 config.go:182] Loaded profile config "kubernetes-upgrade-507000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1025 18:45:24.383635    4756 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:45:24.388027    4756 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 18:45:24.393019    4756 start.go:297] selected driver: qemu2
	I1025 18:45:24.393027    4756 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-507000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-507000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:45:24.393081    4756 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:45:24.395583    4756 cni.go:84] Creating CNI manager for ""
	I1025 18:45:24.395616    4756 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:45:24.395641    4756 start.go:340] cluster config:
	{Name:kubernetes-upgrade-507000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-507000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:45:24.399900    4756 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:45:24.407955    4756 out.go:177] * Starting "kubernetes-upgrade-507000" primary control-plane node in "kubernetes-upgrade-507000" cluster
	I1025 18:45:24.412011    4756 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:45:24.412024    4756 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:45:24.412031    4756 cache.go:56] Caching tarball of preloaded images
	I1025 18:45:24.412102    4756 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:45:24.412107    4756 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:45:24.412154    4756 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/kubernetes-upgrade-507000/config.json ...
	I1025 18:45:24.412501    4756 start.go:360] acquireMachinesLock for kubernetes-upgrade-507000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:45:24.412547    4756 start.go:364] duration metric: took 40.5µs to acquireMachinesLock for "kubernetes-upgrade-507000"
	I1025 18:45:24.412556    4756 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:45:24.412561    4756 fix.go:54] fixHost starting: 
	I1025 18:45:24.412677    4756 fix.go:112] recreateIfNeeded on kubernetes-upgrade-507000: state=Stopped err=<nil>
	W1025 18:45:24.412684    4756 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:45:24.420018    4756 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-507000" ...
	I1025 18:45:24.424087    4756 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:45:24.424136    4756 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:85:a4:ef:c6:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2
	I1025 18:45:24.426170    4756 main.go:141] libmachine: STDOUT: 
	I1025 18:45:24.426190    4756 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:45:24.426224    4756 fix.go:56] duration metric: took 13.663125ms for fixHost
	I1025 18:45:24.426228    4756 start.go:83] releasing machines lock for "kubernetes-upgrade-507000", held for 13.676833ms
	W1025 18:45:24.426232    4756 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:45:24.426278    4756 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:45:24.426282    4756 start.go:729] Will try again in 5 seconds ...
	I1025 18:45:29.428552    4756 start.go:360] acquireMachinesLock for kubernetes-upgrade-507000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:45:29.429122    4756 start.go:364] duration metric: took 453.417µs to acquireMachinesLock for "kubernetes-upgrade-507000"
	I1025 18:45:29.429292    4756 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:45:29.429313    4756 fix.go:54] fixHost starting: 
	I1025 18:45:29.430012    4756 fix.go:112] recreateIfNeeded on kubernetes-upgrade-507000: state=Stopped err=<nil>
	W1025 18:45:29.430037    4756 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:45:29.439508    4756 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-507000" ...
	I1025 18:45:29.442454    4756 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:45:29.442718    4756 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:85:a4:ef:c6:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubernetes-upgrade-507000/disk.qcow2
	I1025 18:45:29.451940    4756 main.go:141] libmachine: STDOUT: 
	I1025 18:45:29.451993    4756 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:45:29.452073    4756 fix.go:56] duration metric: took 22.76375ms for fixHost
	I1025 18:45:29.452090    4756 start.go:83] releasing machines lock for "kubernetes-upgrade-507000", held for 22.94275ms
	W1025 18:45:29.452253    4756 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-507000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-507000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:45:29.460541    4756 out.go:201] 
	W1025 18:45:29.463488    4756 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:45:29.463538    4756 out.go:270] * 
	* 
	W1025 18:45:29.465131    4756 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:45:29.475467    4756 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-507000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-507000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-507000 version --output=json: exit status 1 (43.704625ms)

** stderr ** 
	error: context "kubernetes-upgrade-507000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-10-25 18:45:29.534143 -0700 PDT m=+3770.146234126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-507000 -n kubernetes-upgrade-507000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-507000 -n kubernetes-upgrade-507000: exit status 7 (35.109625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-507000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-507000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-507000
--- FAIL: TestKubernetesUpgrade (19.11s)
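
Note on the failure above: every qemu2 start in this report that exits with status 80 traces back to the same root cause, nothing accepting connections on /var/run/socket_vmnet. A minimal Go probe such as the sketch below (a diagnostic aid, not part of the test suite; the path is socket_vmnet's default) distinguishes "socket file missing" from "daemon not running":

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Default socket_vmnet path, matching the driver error in the log.
	const path = "/var/run/socket_vmnet"

	if _, err := os.Stat(path); err != nil {
		fmt.Printf("socket file missing: %v (is socket_vmnet installed?)\n", err)
		return
	}
	// A "connection refused" here reproduces the driver failure: the file
	// exists but no daemon is listening on it.
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Printf("dial failed: %v (is the socket_vmnet daemon running?)\n", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}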

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.11s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19868
- KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1675166331/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.11s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.98s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19868
- KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3215631096/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.98s)
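
Both upgrade subtests fail identically: HyperKit is a macOS-on-Intel hypervisor, so on this darwin/arm64 host minikube refuses the driver with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic runs. The sketch below shows the kind of platform gate involved; the function name is illustrative, since minikube's real check lives in its driver registry:

package main

import (
	"fmt"
	"runtime"
)

// hyperkitSupported is an illustrative stand-in for the driver registry
// check: HyperKit only exists for macOS on amd64.
func hyperkitSupported() bool {
	return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
}

func main() {
	if !hyperkitSupported() {
		// On this CI host this prints darwin/arm64, matching the log.
		fmt.Printf("The driver 'hyperkit' is not supported on %s/%s\n",
			runtime.GOOS, runtime.GOARCH)
	}
}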

TestStoppedBinaryUpgrade/Upgrade (575.81s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.725146634 start -p stopped-upgrade-473000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.725146634 start -p stopped-upgrade-473000 --memory=2200 --vm-driver=qemu2 : (42.321531042s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.725146634 -p stopped-upgrade-473000 stop
E1025 18:46:24.231025    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.725146634 -p stopped-upgrade-473000 stop: (12.12890875s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-473000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E1025 18:47:38.763517    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:47:55.660145    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
E1025 18:51:24.343093    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-473000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.279044916s)

-- stdout --
	* [stopped-upgrade-473000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-473000" primary control-plane node in "stopped-upgrade-473000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-473000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1025 18:46:25.169326    4810 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:46:25.169505    4810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:46:25.169509    4810 out.go:358] Setting ErrFile to fd 2...
	I1025 18:46:25.169512    4810 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:46:25.169675    4810 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:46:25.170899    4810 out.go:352] Setting JSON to false
	I1025 18:46:25.191438    4810 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4556,"bootTime":1729902629,"procs":559,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:46:25.191516    4810 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:46:25.199246    4810 out.go:177] * [stopped-upgrade-473000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:46:25.208244    4810 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:46:25.208298    4810 notify.go:220] Checking for updates...
	I1025 18:46:25.215200    4810 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:46:25.219182    4810 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:46:25.222219    4810 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:46:25.225227    4810 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:46:25.228288    4810 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:46:25.231618    4810 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:46:25.233143    4810 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1025 18:46:25.236274    4810 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:46:25.240254    4810 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 18:46:25.245225    4810 start.go:297] selected driver: qemu2
	I1025 18:46:25.245231    4810 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-473000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62543 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 18:46:25.245302    4810 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:46:25.248036    4810 cni.go:84] Creating CNI manager for ""
	I1025 18:46:25.248070    4810 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:46:25.248097    4810 start.go:340] cluster config:
	{Name:stopped-upgrade-473000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62543 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 18:46:25.248154    4810 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:46:25.255223    4810 out.go:177] * Starting "stopped-upgrade-473000" primary control-plane node in "stopped-upgrade-473000" cluster
	I1025 18:46:25.259226    4810 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1025 18:46:25.259243    4810 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1025 18:46:25.259250    4810 cache.go:56] Caching tarball of preloaded images
	I1025 18:46:25.259326    4810 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:46:25.259333    4810 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1025 18:46:25.259384    4810 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/config.json ...
	I1025 18:46:25.259770    4810 start.go:360] acquireMachinesLock for stopped-upgrade-473000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:46:25.259817    4810 start.go:364] duration metric: took 40.417µs to acquireMachinesLock for "stopped-upgrade-473000"
	I1025 18:46:25.259825    4810 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:46:25.259830    4810 fix.go:54] fixHost starting: 
	I1025 18:46:25.259942    4810 fix.go:112] recreateIfNeeded on stopped-upgrade-473000: state=Stopped err=<nil>
	W1025 18:46:25.259951    4810 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:46:25.264198    4810 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-473000" ...
	I1025 18:46:25.272037    4810 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:46:25.272115    4810 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/qemu.pid -nic user,model=virtio,hostfwd=tcp::62508-:22,hostfwd=tcp::62509-:2376,hostname=stopped-upgrade-473000 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/disk.qcow2
	I1025 18:46:25.319244    4810 main.go:141] libmachine: STDOUT: 
	I1025 18:46:25.319284    4810 main.go:141] libmachine: STDERR: 
	I1025 18:46:25.319292    4810 main.go:141] libmachine: Waiting for VM to start (ssh -p 62508 docker@127.0.0.1)...
	I1025 18:46:45.217687    4810 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/config.json ...
	I1025 18:46:45.218502    4810 machine.go:93] provisionDockerMachine start ...
	I1025 18:46:45.218759    4810 main.go:141] libmachine: Using SSH client type: native
	I1025 18:46:45.219348    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033f65f0] 0x1033f8e30 <nil>  [] 0s} localhost 62508 <nil> <nil>}
	I1025 18:46:45.219362    4810 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 18:46:45.314922    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1025 18:46:45.314958    4810 buildroot.go:166] provisioning hostname "stopped-upgrade-473000"
	I1025 18:46:45.315111    4810 main.go:141] libmachine: Using SSH client type: native
	I1025 18:46:45.315341    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033f65f0] 0x1033f8e30 <nil>  [] 0s} localhost 62508 <nil> <nil>}
	I1025 18:46:45.315353    4810 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-473000 && echo "stopped-upgrade-473000" | sudo tee /etc/hostname
	I1025 18:46:45.403086    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-473000
	
	I1025 18:46:45.403172    4810 main.go:141] libmachine: Using SSH client type: native
	I1025 18:46:45.403313    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033f65f0] 0x1033f8e30 <nil>  [] 0s} localhost 62508 <nil> <nil>}
	I1025 18:46:45.403341    4810 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-473000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-473000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-473000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 18:46:45.482748    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 18:46:45.482760    4810 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19868-1112/.minikube CaCertPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19868-1112/.minikube}
	I1025 18:46:45.482780    4810 buildroot.go:174] setting up certificates
	I1025 18:46:45.482785    4810 provision.go:84] configureAuth start
	I1025 18:46:45.482792    4810 provision.go:143] copyHostCerts
	I1025 18:46:45.482875    4810 exec_runner.go:144] found /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.pem, removing ...
	I1025 18:46:45.482882    4810 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.pem
	I1025 18:46:45.483005    4810 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.pem (1082 bytes)
	I1025 18:46:45.483219    4810 exec_runner.go:144] found /Users/jenkins/minikube-integration/19868-1112/.minikube/cert.pem, removing ...
	I1025 18:46:45.483224    4810 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19868-1112/.minikube/cert.pem
	I1025 18:46:45.483287    4810 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19868-1112/.minikube/cert.pem (1123 bytes)
	I1025 18:46:45.483424    4810 exec_runner.go:144] found /Users/jenkins/minikube-integration/19868-1112/.minikube/key.pem, removing ...
	I1025 18:46:45.483429    4810 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19868-1112/.minikube/key.pem
	I1025 18:46:45.483486    4810 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19868-1112/.minikube/key.pem (1675 bytes)
	I1025 18:46:45.483587    4810 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-473000 san=[127.0.0.1 localhost minikube stopped-upgrade-473000]
	I1025 18:46:45.632215    4810 provision.go:177] copyRemoteCerts
	I1025 18:46:45.632282    4810 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 18:46:45.632291    4810 sshutil.go:53] new ssh client: &{IP:localhost Port:62508 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/id_rsa Username:docker}
	I1025 18:46:45.671274    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 18:46:45.678738    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 18:46:45.685690    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 18:46:45.692463    4810 provision.go:87] duration metric: took 209.674166ms to configureAuth
	I1025 18:46:45.692473    4810 buildroot.go:189] setting minikube options for container-runtime
	I1025 18:46:45.692586    4810 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:46:45.692640    4810 main.go:141] libmachine: Using SSH client type: native
	I1025 18:46:45.692735    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033f65f0] 0x1033f8e30 <nil>  [] 0s} localhost 62508 <nil> <nil>}
	I1025 18:46:45.692758    4810 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1025 18:46:45.765557    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1025 18:46:45.765565    4810 buildroot.go:70] root file system type: tmpfs
	I1025 18:46:45.765630    4810 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1025 18:46:45.765708    4810 main.go:141] libmachine: Using SSH client type: native
	I1025 18:46:45.765820    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033f65f0] 0x1033f8e30 <nil>  [] 0s} localhost 62508 <nil> <nil>}
	I1025 18:46:45.765853    4810 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1025 18:46:45.841589    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1025 18:46:45.841651    4810 main.go:141] libmachine: Using SSH client type: native
	I1025 18:46:45.841757    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033f65f0] 0x1033f8e30 <nil>  [] 0s} localhost 62508 <nil> <nil>}
	I1025 18:46:45.841766    4810 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1025 18:46:46.233044    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1025 18:46:46.233058    4810 machine.go:96] duration metric: took 1.014568583s to provisionDockerMachine
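
The provisioning step above renders docker.service, writes it to docker.service.new over SSH, and only swaps it in (followed by daemon-reload, enable, restart) when diff reports a difference; on this freshly restarted VM the unit did not exist yet, hence the "can't stat" message. A local Go sketch of that idempotent update pattern (the helper name and the choice to shell out to systemctl are assumptions for illustration):

package provision

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit writes the new unit only when it differs from the current one,
// mirroring the `diff -u ... || { mv ...; systemctl ... }` sequence above.
func updateUnit(path string, content []byte) error {
	if old, err := os.ReadFile(path); err == nil && bytes.Equal(old, content) {
		return nil // unchanged: skip the reload/restart entirely
	}
	if err := os.WriteFile(path+".new", content, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}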
	I1025 18:46:46.233065    4810 start.go:293] postStartSetup for "stopped-upgrade-473000" (driver="qemu2")
	I1025 18:46:46.233072    4810 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 18:46:46.233139    4810 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 18:46:46.233148    4810 sshutil.go:53] new ssh client: &{IP:localhost Port:62508 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/id_rsa Username:docker}
	I1025 18:46:46.276589    4810 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 18:46:46.278167    4810 info.go:137] Remote host: Buildroot 2021.02.12
	I1025 18:46:46.278175    4810 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19868-1112/.minikube/addons for local assets ...
	I1025 18:46:46.278276    4810 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19868-1112/.minikube/files for local assets ...
	I1025 18:46:46.278425    4810 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19868-1112/.minikube/files/etc/ssl/certs/16722.pem -> 16722.pem in /etc/ssl/certs
	I1025 18:46:46.278592    4810 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 18:46:46.281483    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/files/etc/ssl/certs/16722.pem --> /etc/ssl/certs/16722.pem (1708 bytes)
	I1025 18:46:46.288199    4810 start.go:296] duration metric: took 55.129125ms for postStartSetup
	I1025 18:46:46.288215    4810 fix.go:56] duration metric: took 21.028823458s for fixHost
	I1025 18:46:46.288266    4810 main.go:141] libmachine: Using SSH client type: native
	I1025 18:46:46.288364    4810 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1033f65f0] 0x1033f8e30 <nil>  [] 0s} localhost 62508 <nil> <nil>}
	I1025 18:46:46.288375    4810 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1025 18:46:46.364259    4810 main.go:141] libmachine: SSH cmd err, output: <nil>: 1729907206.240625462
	
	I1025 18:46:46.364268    4810 fix.go:216] guest clock: 1729907206.240625462
	I1025 18:46:46.364272    4810 fix.go:229] Guest: 2024-10-25 18:46:46.240625462 -0700 PDT Remote: 2024-10-25 18:46:46.288216 -0700 PDT m=+21.150621210 (delta=-47.590538ms)
	I1025 18:46:46.364284    4810 fix.go:200] guest clock delta is within tolerance: -47.590538ms
	I1025 18:46:46.364287    4810 start.go:83] releasing machines lock for "stopped-upgrade-473000", held for 21.104904875s
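
The fix.go lines above run `date +%s.%N` in the guest and compare the result against the host clock; the -47.59ms delta is within tolerance, so no resync is needed. A small sketch of that comparison using the exact values from the log (the 2s tolerance is an assumption for illustration; minikube's actual threshold may differ):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns guest minus host.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Host timestamp from the log: 2024-10-25 18:46:46.288216 -0700 PDT.
	host := time.Unix(0, 1729907206288216000)
	d, _ := clockDelta("1729907206.240625462", host)
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v, within tolerance: %v\n", d, d > -tolerance && d < tolerance)
	// Prints a delta of roughly -47.59ms, matching the log.
}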
	I1025 18:46:46.364371    4810 ssh_runner.go:195] Run: cat /version.json
	I1025 18:46:46.364381    4810 sshutil.go:53] new ssh client: &{IP:localhost Port:62508 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/id_rsa Username:docker}
	I1025 18:46:46.364390    4810 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 18:46:46.364408    4810 sshutil.go:53] new ssh client: &{IP:localhost Port:62508 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/id_rsa Username:docker}
	W1025 18:46:46.364892    4810 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:62652->127.0.0.1:62508: write: broken pipe
	I1025 18:46:46.364913    4810 retry.go:31] will retry after 308.163959ms: ssh: handshake failed: write tcp 127.0.0.1:62652->127.0.0.1:62508: write: broken pipe
	W1025 18:46:46.718938    4810 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1025 18:46:46.719056    4810 ssh_runner.go:195] Run: systemctl --version
	I1025 18:46:46.723161    4810 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 18:46:46.725718    4810 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 18:46:46.725783    4810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1025 18:46:46.730165    4810 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1025 18:46:46.736270    4810 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1025 18:46:46.736280    4810 start.go:495] detecting cgroup driver to use...
	I1025 18:46:46.736385    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:46:46.744612    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1025 18:46:46.748054    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1025 18:46:46.751521    4810 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1025 18:46:46.751550    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1025 18:46:46.754927    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:46:46.758335    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1025 18:46:46.761172    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1025 18:46:46.763977    4810 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 18:46:46.767281    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1025 18:46:46.770615    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1025 18:46:46.773618    4810 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1025 18:46:46.776470    4810 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 18:46:46.779513    4810 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 18:46:46.782626    4810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:46:46.870978    4810 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1025 18:46:46.877138    4810 start.go:495] detecting cgroup driver to use...
	I1025 18:46:46.877223    4810 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1025 18:46:46.883073    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 18:46:46.888097    4810 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 18:46:46.894779    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 18:46:46.899366    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 18:46:46.904197    4810 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1025 18:46:46.951438    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1025 18:46:46.956694    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 18:46:46.962060    4810 ssh_runner.go:195] Run: which cri-dockerd
	I1025 18:46:46.963255    4810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1025 18:46:46.966333    4810 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1025 18:46:46.971411    4810 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1025 18:46:47.050445    4810 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1025 18:46:47.113608    4810 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1025 18:46:47.113678    4810 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1025 18:46:47.119111    4810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:46:47.197395    4810 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:46:48.345536    4810 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.148147583s)
	I1025 18:46:48.345616    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1025 18:46:48.350163    4810 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1025 18:46:48.356351    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 18:46:48.361341    4810 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1025 18:46:48.438941    4810 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1025 18:46:48.512687    4810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:46:48.577704    4810 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1025 18:46:48.584255    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1025 18:46:48.589347    4810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:46:48.654206    4810 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1025 18:46:48.692723    4810 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1025 18:46:48.693540    4810 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1025 18:46:48.695432    4810 start.go:563] Will wait 60s for crictl version
	I1025 18:46:48.695478    4810 ssh_runner.go:195] Run: which crictl
	I1025 18:46:48.696693    4810 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1025 18:46:48.712080    4810 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1025 18:46:48.712157    4810 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:46:48.729662    4810 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1025 18:46:48.749877    4810 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1025 18:46:48.749962    4810 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1025 18:46:48.751188    4810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:46:48.754976    4810 kubeadm.go:883] updating cluster {Name:stopped-upgrade-473000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62543 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1025 18:46:48.755024    4810 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1025 18:46:48.755072    4810 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:46:48.765230    4810 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 18:46:48.765238    4810 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1025 18:46:48.765295    4810 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 18:46:48.768368    4810 ssh_runner.go:195] Run: which lz4
	I1025 18:46:48.769665    4810 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1025 18:46:48.770944    4810 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1025 18:46:48.770955    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1025 18:46:49.781534    4810 docker.go:653] duration metric: took 1.011945042s to copy over tarball
	I1025 18:46:49.781624    4810 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1025 18:46:50.973874    4810 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.192259916s)
	I1025 18:46:50.973889    4810 ssh_runner.go:146] rm: /preloaded.tar.lz4
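
The preload fast path above: stat shows /preloaded.tar.lz4 is absent, so the ~360MB cached tarball is copied in and unpacked over /var with tar's lz4 filter, then removed. A local sketch of the unpack step (minikube runs the same commands over SSH; the function name is illustrative):

package preload

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload skips when the tarball is absent, otherwise unpacks the
// preloaded image layers over /var and removes the tarball afterwards.
func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return nil // same role as the `stat -c "%s %y"` existence check above
	}
	cmd := exec.Command("sudo", "tar", "--xattrs",
		"--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("tar: %v: %s", err, out)
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}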
	I1025 18:46:50.989872    4810 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1025 18:46:50.992768    4810 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1025 18:46:50.997689    4810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:46:51.083002    4810 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1025 18:46:52.594428    4810 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.511442458s)
	I1025 18:46:52.594533    4810 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1025 18:46:52.605339    4810 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1025 18:46:52.605349    4810 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1025 18:46:52.605353    4810 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 18:46:52.610415    4810 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:46:52.612114    4810 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 18:46:52.613730    4810 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 18:46:52.614231    4810 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:46:52.616402    4810 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 18:46:52.616508    4810 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 18:46:52.617867    4810 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 18:46:52.618170    4810 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 18:46:52.619337    4810 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1025 18:46:52.619449    4810 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 18:46:52.620397    4810 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 18:46:52.620979    4810 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1025 18:46:52.621737    4810 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1025 18:46:52.622074    4810 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:46:52.622875    4810 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1025 18:46:52.623907    4810 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:46:53.214063    4810 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1025 18:46:53.214117    4810 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1025 18:46:53.234269    4810 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1025 18:46:53.234314    4810 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1025 18:46:53.234390    4810 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1025 18:46:53.234967    4810 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1025 18:46:53.234983    4810 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1025 18:46:53.235021    4810 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1025 18:46:53.246219    4810 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1025 18:46:53.249170    4810 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1025 18:46:53.268166    4810 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1025 18:46:53.271082    4810 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 18:46:53.279010    4810 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1025 18:46:53.279033    4810 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1025 18:46:53.279100    4810 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1025 18:46:53.294433    4810 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1025 18:46:53.294565    4810 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1025 18:46:53.294581    4810 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 18:46:53.294639    4810 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1025 18:46:53.304190    4810 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1025 18:46:53.362442    4810 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1025 18:46:53.373836    4810 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1025 18:46:53.373858    4810 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1025 18:46:53.373918    4810 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1025 18:46:53.383808    4810 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1025 18:46:53.386799    4810 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1025 18:46:53.396709    4810 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1025 18:46:53.396732    4810 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1025 18:46:53.396792    4810 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1025 18:46:53.406729    4810 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1025 18:46:53.406863    4810 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1025 18:46:53.408383    4810 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1025 18:46:53.408394    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1025 18:46:53.416692    4810 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1025 18:46:53.416701    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1025 18:46:53.441979    4810 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1025 18:46:53.465930    4810 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1025 18:46:53.466083    4810 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:46:53.476440    4810 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1025 18:46:53.476462    4810 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:46:53.476537    4810 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1025 18:46:53.486423    4810 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1025 18:46:53.486588    4810 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1025 18:46:53.488066    4810 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1025 18:46:53.488077    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1025 18:46:53.527640    4810 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1025 18:46:53.527653    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1025 18:46:53.566988    4810 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W1025 18:46:53.568441    4810 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1025 18:46:53.568551    4810 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:46:53.579843    4810 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1025 18:46:53.579868    4810 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:46:53.579931    4810 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:46:53.593675    4810 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 18:46:53.593814    4810 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1025 18:46:53.595286    4810 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1025 18:46:53.595301    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1025 18:46:53.629005    4810 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1025 18:46:53.629019    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1025 18:46:53.871882    4810 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1025 18:46:53.871919    4810 cache_images.go:92] duration metric: took 1.266585334s to LoadCachedImages
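	The image-priming sequence above follows a fixed pattern: stat the tarball on the guest (a non-zero exit means it is absent), scp it over from the host cache, then pipe it into the runtime with `sudo cat ... | docker load`. A minimal Go sketch of that pattern, using plain ssh/scp and a "guest" host alias as hypothetical stand-ins for minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

// run executes a command on the guest over ssh. "guest" is a placeholder
// host alias; the real test drives a QEMU VM through minikube's ssh_runner.
func run(cmd string) error {
	return exec.Command("ssh", "guest", cmd).Run()
}

// loadCachedImage mirrors the log's sequence: stat as an existence check,
// scp the tarball over if it is missing, then pipe it into docker load.
func loadCachedImage(local, remote string) error {
	if err := run(fmt.Sprintf(`stat -c "%%s %%y" %s`, remote)); err != nil {
		if err := exec.Command("scp", local, "guest:"+remote).Run(); err != nil {
			return fmt.Errorf("transfer %s: %w", local, err)
		}
	}
	return run(fmt.Sprintf("sudo cat %s | docker load", remote))
}

func main() {
	// Remote path from the log; the local cache path is abbreviated here.
	err := loadCachedImage(
		"/path/to/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7",
		"/var/lib/minikube/images/pause_3.7",
	)
	fmt.Println("load result:", err)
}
```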
	W1025 18:46:53.871960    4810 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I1025 18:46:53.871965    4810 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1025 18:46:53.872023    4810 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-473000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 18:46:53.872097    4810 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1025 18:46:53.885948    4810 cni.go:84] Creating CNI manager for ""
	I1025 18:46:53.885969    4810 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:46:53.885978    4810 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1025 18:46:53.885991    4810 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-473000 NodeName:stopped-upgrade-473000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 18:46:53.886066    4810 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-473000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
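	One detail worth noting in the rendered KubeletConfiguration above: the `cgroupDriver: cgroupfs` value is not hard-coded. It comes from probing the runtime with `docker info --format {{.CgroupDriver}}` (the Run: line just before the CNI messages). A self-contained sketch of that probe:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the container runtime which cgroup driver it uses, as the log does
	// before rendering the kubelet config. Requires a reachable docker CLI.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	// The value (cgroupfs or systemd) is what lands in cgroupDriver above.
	fmt.Println("cgroupDriver:", strings.TrimSpace(string(out)))
}
```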
	I1025 18:46:53.886140    4810 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1025 18:46:53.889048    4810 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 18:46:53.889086    4810 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 18:46:53.891664    4810 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1025 18:46:53.896953    4810 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 18:46:53.902093    4810 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1025 18:46:53.907589    4810 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1025 18:46:53.908881    4810 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 18:46:53.912316    4810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:46:53.993496    4810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 18:46:54.001155    4810 certs.go:68] Setting up /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000 for IP: 10.0.2.15
	I1025 18:46:54.001168    4810 certs.go:194] generating shared ca certs ...
	I1025 18:46:54.001176    4810 certs.go:226] acquiring lock for ca certs: {Name:mk4d96eff7eec2b0b424f4d9808345f1ae37fa52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:46:54.001372    4810 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.key
	I1025 18:46:54.002156    4810 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/proxy-client-ca.key
	I1025 18:46:54.002168    4810 certs.go:256] generating profile certs ...
	I1025 18:46:54.002457    4810 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/client.key
	I1025 18:46:54.002480    4810 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.key.3dec5c91
	I1025 18:46:54.002496    4810 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.crt.3dec5c91 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1025 18:46:54.053224    4810 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.crt.3dec5c91 ...
	I1025 18:46:54.053237    4810 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.crt.3dec5c91: {Name:mk05743962903270bdc048d28ab3d3d2206b4886 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:46:54.053528    4810 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.key.3dec5c91 ...
	I1025 18:46:54.053533    4810 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.key.3dec5c91: {Name:mk7ebfc7b0c4a484c3f5b41bb12ac54c0b953481 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:46:54.053689    4810 certs.go:381] copying /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.crt.3dec5c91 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.crt
	I1025 18:46:54.053823    4810 certs.go:385] copying /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.key.3dec5c91 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.key
	I1025 18:46:54.054119    4810 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/proxy-client.key
	I1025 18:46:54.054290    4810 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/1672.pem (1338 bytes)
	W1025 18:46:54.054436    4810 certs.go:480] ignoring /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/1672_empty.pem, impossibly tiny 0 bytes
	I1025 18:46:54.054442    4810 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 18:46:54.054466    4810 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem (1082 bytes)
	I1025 18:46:54.054486    4810 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem (1123 bytes)
	I1025 18:46:54.054507    4810 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/key.pem (1675 bytes)
	I1025 18:46:54.054551    4810 certs.go:484] found cert: /Users/jenkins/minikube-integration/19868-1112/.minikube/files/etc/ssl/certs/16722.pem (1708 bytes)
	I1025 18:46:54.054888    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 18:46:54.062438    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 18:46:54.069227    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 18:46:54.075820    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1025 18:46:54.083419    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 18:46:54.090782    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 18:46:54.097966    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 18:46:54.104743    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 18:46:54.111400    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/files/etc/ssl/certs/16722.pem --> /usr/share/ca-certificates/16722.pem (1708 bytes)
	I1025 18:46:54.118069    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 18:46:54.124983    4810 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/1672.pem --> /usr/share/ca-certificates/1672.pem (1338 bytes)
	I1025 18:46:54.131668    4810 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 18:46:54.137401    4810 ssh_runner.go:195] Run: openssl version
	I1025 18:46:54.139492    4810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16722.pem && ln -fs /usr/share/ca-certificates/16722.pem /etc/ssl/certs/16722.pem"
	I1025 18:46:54.143208    4810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16722.pem
	I1025 18:46:54.144603    4810 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 00:50 /usr/share/ca-certificates/16722.pem
	I1025 18:46:54.144639    4810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16722.pem
	I1025 18:46:54.146285    4810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16722.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 18:46:54.148993    4810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 18:46:54.151863    4810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:46:54.153350    4810 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 00:43 /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:46:54.153379    4810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 18:46:54.155183    4810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 18:46:54.158562    4810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1672.pem && ln -fs /usr/share/ca-certificates/1672.pem /etc/ssl/certs/1672.pem"
	I1025 18:46:54.161527    4810 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1672.pem
	I1025 18:46:54.162768    4810 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 00:50 /usr/share/ca-certificates/1672.pem
	I1025 18:46:54.162798    4810 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1672.pem
	I1025 18:46:54.164661    4810 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1672.pem /etc/ssl/certs/51391683.0"
	I1025 18:46:54.167748    4810 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 18:46:54.169114    4810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 18:46:54.171098    4810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 18:46:54.173075    4810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 18:46:54.174968    4810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 18:46:54.177057    4810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 18:46:54.178730    4810 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
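	The openssl calls above do two jobs: `-hash -noout` prints the subject hash that names the `/etc/ssl/certs/<hash>.0` symlink (3ec20f2e.0, b5213941.0, and 51391683.0 in this run), and `-checkend 86400` exits non-zero if the certificate expires within the next 24 hours. A small Go sketch of both checks, with one of the log's cert paths as an example:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the hash openssl uses for /etc/ssl/certs/<hash>.0 links.
func subjectHash(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	return strings.TrimSpace(string(out)), err
}

// expiresWithinDay mirrors `-checkend 86400`: openssl exits non-zero when the
// certificate will have expired 86400 seconds from now.
func expiresWithinDay(pem string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", pem, "-checkend", "86400").Run() != nil
}

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	h, err := subjectHash(pem)
	if err != nil {
		fmt.Println("hash failed:", err)
		return
	}
	fmt.Printf("link: /etc/ssl/certs/%s.0, expiring within 24h: %v\n", h, expiresWithinDay(pem))
}
```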
	I1025 18:46:54.180459    4810 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-473000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:62543 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-473000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1025 18:46:54.180537    4810 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:46:54.190309    4810 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 18:46:54.193241    4810 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1025 18:46:54.193250    4810 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1025 18:46:54.193284    4810 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 18:46:54.195972    4810 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 18:46:54.196425    4810 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-473000" does not appear in /Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:46:54.196541    4810 kubeconfig.go:62] /Users/jenkins/minikube-integration/19868-1112/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-473000" cluster setting kubeconfig missing "stopped-upgrade-473000" context setting]
	I1025 18:46:54.196771    4810 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/kubeconfig: {Name:mk88d1ac601cc80b64027f8557b82969027e8e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:46:54.197230    4810 kapi.go:59] client config for stopped-upgrade-473000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/client.key", CAFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104e52680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:46:54.197733    4810 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 18:46:54.200290    4810 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-473000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
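	The drift check above is a plain `diff -u` of the deployed kubeadm.yaml against the freshly rendered .new file; any difference (here the criSocket URI scheme and the cgroupDriver/kubelet options) triggers a full reconfigure. The same check, sketched in Go:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// diff exits 0 when the files match and 1 when they differ, so any error
	// from CombinedOutput is treated as "config drifted" in this sketch.
	out, err := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	if err != nil {
		fmt.Printf("kubeadm config drift, reconfiguring:\n%s", out)
		return
	}
	fmt.Println("kubeadm config unchanged")
}
```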
	I1025 18:46:54.200295    4810 kubeadm.go:1160] stopping kube-system containers ...
	I1025 18:46:54.200335    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1025 18:46:54.211060    4810 docker.go:483] Stopping containers: [ca283390d210 451202c4a948 11b566bdf60e 5f90e347d427 1b9369654c64 bf8dc2f49a56 dfa41dbae324 6c4b901e85f8]
	I1025 18:46:54.211132    4810 ssh_runner.go:195] Run: docker stop ca283390d210 451202c4a948 11b566bdf60e 5f90e347d427 1b9369654c64 bf8dc2f49a56 dfa41dbae324 6c4b901e85f8
	I1025 18:46:54.225032    4810 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 18:46:54.230560    4810 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:46:54.233398    4810 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:46:54.233404    4810 kubeadm.go:157] found existing configuration files:
	
	I1025 18:46:54.233435    4810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/admin.conf
	I1025 18:46:54.235890    4810 kubeadm.go:163] "https://control-plane.minikube.internal:62543" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 18:46:54.235921    4810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 18:46:54.238881    4810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/kubelet.conf
	I1025 18:46:54.241618    4810 kubeadm.go:163] "https://control-plane.minikube.internal:62543" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 18:46:54.241654    4810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 18:46:54.244233    4810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/controller-manager.conf
	I1025 18:46:54.246956    4810 kubeadm.go:163] "https://control-plane.minikube.internal:62543" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 18:46:54.246981    4810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 18:46:54.249869    4810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/scheduler.conf
	I1025 18:46:54.252331    4810 kubeadm.go:163] "https://control-plane.minikube.internal:62543" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 18:46:54.252358    4810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
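	The four grep/rm pairs above are a sweep over the kubeconfigs kubeadm manages: each file is kept only if it already points at the expected control-plane endpoint, and is otherwise deleted so the `kubeadm init phase kubeconfig all` step that follows can regenerate it. The same sweep as a loop, with the paths and endpoint taken from the log:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:62543"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the file is missing or lacks the endpoint;
		// either way the stale file is removed and kubeadm will rewrite it.
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			os.Remove(f)
			fmt.Println("removed stale", f)
		}
	}
}
```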
	I1025 18:46:54.255374    4810 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:46:54.258577    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:46:54.281272    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:46:54.680591    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:46:54.808945    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:46:54.838541    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 18:46:54.861209    4810 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:46:54.861299    4810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:46:55.363461    4810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:46:55.863164    4810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:46:55.867207    4810 api_server.go:72] duration metric: took 1.006020333s to wait for apiserver process to appear ...
	I1025 18:46:55.867216    4810 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:46:55.867229    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:00.869301    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:00.869404    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:05.977435    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:05.977487    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:10.978624    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:10.978731    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:15.980347    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:15.980444    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:20.982470    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:20.982571    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:25.983845    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:25.983928    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:30.985438    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:30.985462    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:35.987770    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:35.987793    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:40.990144    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:40.990183    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:45.992634    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:45.992663    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:50.995029    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:47:50.995049    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:47:55.997399    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
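	From here the run settles into the loop that ultimately times this test out: every probe of https://10.0.2.15:8443/healthz hits the roughly 5-second client timeout, and after enough consecutive failures the harness pauses to gather container logs before probing again. A minimal sketch of such a poll loop (the InsecureSkipVerify is for the sketch only; the real client pins the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second, // matches the ~5s gaps between probes
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 12; i++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				fmt.Println("apiserver healthy")
				return
			}
			err = fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		fmt.Println("probe failed:", err)
		time.Sleep(5 * time.Second)
	}
	fmt.Println("apiserver never became healthy")
}
```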
	I1025 18:47:55.997553    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:47:56.009633    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:47:56.009728    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:47:56.020481    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:47:56.020559    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:47:56.031161    4810 logs.go:282] 0 containers: []
	W1025 18:47:56.031184    4810 logs.go:284] No container was found matching "coredns"
	I1025 18:47:56.031248    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:47:56.041718    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:47:56.041808    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:47:56.052150    4810 logs.go:282] 0 containers: []
	W1025 18:47:56.052162    4810 logs.go:284] No container was found matching "kube-proxy"
	I1025 18:47:56.052235    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:47:56.062449    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:47:56.062529    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:47:56.072427    4810 logs.go:282] 0 containers: []
	W1025 18:47:56.072438    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:47:56.072500    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:47:56.081716    4810 logs.go:282] 0 containers: []
	W1025 18:47:56.081729    4810 logs.go:284] No container was found matching "storage-provisioner"
	I1025 18:47:56.081735    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:47:56.081741    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:47:56.094005    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:47:56.094016    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:47:56.107430    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:47:56.107439    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:47:56.122548    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:47:56.122559    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:47:56.151977    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:47:56.151990    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:47:56.170290    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:47:56.170301    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:47:56.182239    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:47:56.182253    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:47:56.186851    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:47:56.186860    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:47:56.291554    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:47:56.291568    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:47:56.307335    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:47:56.307345    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:47:56.325430    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:47:56.325440    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:47:56.351135    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:47:56.351147    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:47:56.382437    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:47:56.382452    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:47:58.899065    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:03.901473    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:03.901645    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:03.915101    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:48:03.915175    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:03.926394    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:48:03.926462    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:03.936803    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:48:03.936898    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:03.947534    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:48:03.947619    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:03.957844    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:48:03.957911    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:03.972977    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:48:03.973064    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:03.985600    4810 logs.go:282] 0 containers: []
	W1025 18:48:03.985611    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:03.985681    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:03.996828    4810 logs.go:282] 1 containers: [d67f7969a5df]
	I1025 18:48:03.996845    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:48:03.996851    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:48:04.009890    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:48:04.009901    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:48:04.021131    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:48:04.021142    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:48:04.036191    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:48:04.036200    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:48:04.047641    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:48:04.047651    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:48:04.058557    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:48:04.058570    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:04.070424    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:04.070440    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:04.099875    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:04.099883    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:04.137758    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:48:04.137769    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:48:04.151516    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:48:04.151527    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:48:04.166048    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:48:04.166058    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:48:04.189102    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:48:04.189112    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:48:04.202971    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:04.202983    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:04.207205    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:48:04.207213    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:48:04.225567    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:48:04.225578    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:48:04.242462    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:04.242471    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:06.770676    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:11.773141    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:11.773417    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:11.799433    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:48:11.799556    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:11.816344    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:48:11.816443    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:11.828968    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:48:11.829049    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:11.840184    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:48:11.840268    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:11.850536    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:48:11.850614    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:11.861075    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:48:11.861149    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:11.871554    4810 logs.go:282] 0 containers: []
	W1025 18:48:11.871568    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:11.871639    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:11.881692    4810 logs.go:282] 1 containers: [d67f7969a5df]
	I1025 18:48:11.881709    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:48:11.881714    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:48:11.892663    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:48:11.892675    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:11.906013    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:48:11.906026    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:48:11.929857    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:48:11.929872    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:48:11.949375    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:48:11.949386    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:48:11.963794    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:11.963804    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:11.989388    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:48:11.989397    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:48:12.003020    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:48:12.003031    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:48:12.020759    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:48:12.020771    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:48:12.032092    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:48:12.032103    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:48:12.044120    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:48:12.044135    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:48:12.060416    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:12.060426    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:12.095950    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:48:12.095961    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:48:12.109720    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:48:12.109732    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:48:12.127153    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:12.127167    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:12.155415    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:12.155423    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:14.661915    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:19.664417    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:19.664619    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:19.683921    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:48:19.684011    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:19.711475    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:48:19.711557    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:19.733555    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:48:19.733639    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:19.744202    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:48:19.744284    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:19.756578    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:48:19.756655    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:19.767008    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:48:19.767078    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:19.776945    4810 logs.go:282] 0 containers: []
	W1025 18:48:19.776957    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:19.777034    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:19.787584    4810 logs.go:282] 1 containers: [d67f7969a5df]
	I1025 18:48:19.787602    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:48:19.787608    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:48:19.798869    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:48:19.798879    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:48:19.821917    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:48:19.821928    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:48:19.844441    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:48:19.844455    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:48:19.856140    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:48:19.856152    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:48:19.874140    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:48:19.874150    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:48:19.889161    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:48:19.889177    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:48:19.904857    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:48:19.904870    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:48:19.922549    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:19.922559    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:19.949509    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:48:19.949517    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:19.961344    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:48:19.961354    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:48:19.974940    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:19.974952    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:19.979399    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:19.979409    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:20.020890    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:48:20.020901    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:48:20.035494    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:48:20.035504    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:48:20.053045    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:20.053059    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:22.585026    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:27.587802    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:27.588101    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:27.612844    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:48:27.612954    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:27.629022    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:48:27.629115    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:27.642155    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:48:27.642235    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:27.653490    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:48:27.653567    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:27.663828    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:48:27.663909    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:27.675978    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:48:27.676060    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:27.686077    4810 logs.go:282] 0 containers: []
	W1025 18:48:27.686093    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:27.686163    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:27.696304    4810 logs.go:282] 1 containers: [d67f7969a5df]
	I1025 18:48:27.696322    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:48:27.696328    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:48:27.707337    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:48:27.707349    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:48:27.718766    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:48:27.718778    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:48:27.733998    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:27.734011    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:27.761049    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:48:27.761057    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:48:27.775007    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:48:27.775023    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:48:27.792014    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:48:27.792024    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:48:27.806081    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:48:27.806091    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:48:27.830202    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:48:27.830212    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:48:27.847273    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:48:27.847284    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:27.860331    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:27.860346    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:27.896357    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:48:27.896368    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:48:27.910185    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:48:27.910198    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:48:27.924278    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:48:27.924291    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:48:27.936083    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:27.936096    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:27.964015    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:27.964023    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
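	(The block above is one full iteration of minikube's apiserver wait loop: a GET against https://10.0.2.15:8443/healthz that gives up after a 5-second client timeout — "Client.Timeout exceeded while awaiting headers" — followed by a diagnostic sweep that lists the control-plane containers and tails their logs. Below is a minimal Go sketch of that probe pattern, assuming a plain net/http client with a 5 s timeout and the VM's self-signed apiserver certificate; it is an illustration, not minikube's actual api_server.go.)

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        // 5s hard timeout: matches the gap between each "Checking apiserver
	        // healthz" line and its "stopped: ... Client.Timeout exceeded" line.
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // the apiserver inside the VM serves a self-signed cert
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        for i := 0; i < 3; i++ {
	            resp, err := client.Get("https://10.0.2.15:8443/healthz")
	            if err != nil {
	                fmt.Println("stopped:", err)
	            } else {
	                fmt.Println("healthz:", resp.Status)
	                resp.Body.Close()
	            }
	            time.Sleep(2500 * time.Millisecond) // rough pause seen between cycles
	        }
	    }

	(Run against an unreachable endpoint, each Get returns after about 5 s with the same timeout error that appears in the "stopped:" lines above.)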
	I1025 18:48:30.469556    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:35.472090    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:35.472494    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:35.499817    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:48:35.499961    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:35.519859    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:48:35.519948    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:35.533501    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:48:35.533591    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:35.544808    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:48:35.544886    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:35.555348    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:48:35.555422    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:35.565854    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:48:35.565922    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:35.575944    4810 logs.go:282] 0 containers: []
	W1025 18:48:35.575956    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:35.576019    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:35.586414    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:48:35.586435    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:48:35.586443    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:48:35.600103    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:48:35.600115    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:48:35.611183    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:48:35.611197    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:48:35.622645    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:48:35.622657    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:35.635060    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:48:35.635074    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:48:35.648635    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:48:35.648648    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:48:35.660004    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:48:35.660015    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:48:35.682903    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:35.682914    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:35.708897    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:48:35.708903    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:48:35.730065    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:48:35.730076    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:48:35.749579    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:48:35.749589    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:48:35.760854    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:48:35.760866    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:48:35.778428    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:48:35.778443    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:48:35.797871    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:35.797885    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:35.828260    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:35.828268    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:35.862326    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:48:35.862341    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:48:35.875369    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:35.875379    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
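	(Note that the storage-provisioner listing changes between the first two sweeps: at 18:48:27 the name filter matched one container (d67f7969a5df), while from 18:48:35 onward it matches two (2c3dc420f531 joined it), meaning the provisioner was restarted while the apiserver stayed unreachable. The listings themselves are plain docker ps name filters; below is a hypothetical helper mirroring that call — containerIDs is not a real minikube function.)

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containerIDs mirrors the log's
	    // "docker ps -a --filter name=<x> --format {{.ID}}" calls.
	    func containerIDs(name string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name="+name, "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil // one short ID per line
	    }

	    func main() {
	        ids, err := containerIDs("k8s_storage-provisioner")
	        if err != nil {
	            fmt.Println("docker ps failed:", err)
	            return
	        }
	        fmt.Printf("%d containers: %v\n", len(ids), ids)
	    }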
	I1025 18:48:38.381717    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:43.384213    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:43.384453    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:43.407822    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:48:43.407955    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:43.424556    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:48:43.424659    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:43.438052    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:48:43.438136    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:43.449383    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:48:43.449467    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:43.459948    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:48:43.460019    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:43.474747    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:48:43.474814    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:43.485313    4810 logs.go:282] 0 containers: []
	W1025 18:48:43.485326    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:43.485381    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:43.496085    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:48:43.496103    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:48:43.496108    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:48:43.519400    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:48:43.519410    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:48:43.534231    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:48:43.534242    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:48:43.552093    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:43.552105    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:43.581036    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:48:43.581047    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:48:43.597850    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:48:43.597860    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:48:43.615214    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:43.615224    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:43.639774    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:48:43.639781    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:43.655795    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:43.655809    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:43.691963    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:48:43.691974    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:48:43.708883    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:48:43.708894    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:48:43.721393    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:48:43.721405    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:48:43.733063    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:48:43.733075    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:48:43.745822    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:48:43.745835    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:48:43.757540    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:43.757551    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:43.763573    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:48:43.763581    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:48:43.777437    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:48:43.777452    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
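	(The "container status" step in each sweep uses a shell fallback: the backtick substitution `which crictl || echo crictl` expands to the crictl path when it is installed — otherwise to the bare name, which then fails — and the trailing `|| sudo docker ps -a` catches either failure. A simplified Go rendering of that try-crictl-then-docker idiom follows; containerStatus is a hypothetical name, and the real command runs through bash -c exactly as shown in the log.)

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // containerStatus: prefer crictl when present, else fall back to docker.
	    func containerStatus() ([]byte, error) {
	        if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
	            return out, nil
	        }
	        return exec.Command("sudo", "docker", "ps", "-a").Output()
	    }

	    func main() {
	        out, err := containerStatus()
	        if err != nil {
	            fmt.Println("both crictl and docker failed:", err)
	            return
	        }
	        fmt.Print(string(out))
	    }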
	I1025 18:48:46.293885    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:51.296655    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:51.296841    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:51.311707    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:48:51.311805    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:51.323196    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:48:51.323271    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:51.333907    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:48:51.333990    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:51.345112    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:48:51.345190    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:51.356045    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:48:51.356126    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:51.366635    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:48:51.366712    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:51.380020    4810 logs.go:282] 0 containers: []
	W1025 18:48:51.380037    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:51.380112    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:51.390491    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:48:51.390517    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:51.390522    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:48:51.394641    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:48:51.394648    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:48:51.408524    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:48:51.408534    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:48:51.426340    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:48:51.426351    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:48:51.441283    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:48:51.441297    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:48:51.452749    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:51.452760    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:51.477343    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:51.477356    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:51.506004    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:48:51.506012    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:48:51.519001    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:48:51.519014    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:48:51.541822    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:48:51.541833    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:48:51.553223    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:48:51.553233    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:48:51.564424    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:51.564435    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:51.603078    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:48:51.603094    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:51.615400    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:48:51.615414    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:48:51.629440    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:48:51.629452    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:48:51.646772    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:48:51.646783    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:48:51.664233    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:48:51.664243    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
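	(Every per-component gather in these sweeps is the same call, docker logs --tail 400 <id>, run once per container ID found earlier; components listed with "2 containers" — apiserver, etcd, scheduler, controller-manager — get both the current and the previous, exited attempt tailed. A small sketch of that fan-out; tailLogs is a hypothetical helper, and the IDs are the kube-apiserver pair from the listings above.)

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    // tailLogs wraps "docker logs --tail 400 <id>".
	    func tailLogs(id string) {
	        out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	        fmt.Printf("== %s (err: %v) ==\n%s", id, err, out)
	    }

	    func main() {
	        // current + previous kube-apiserver attempts
	        for _, id := range []string{"c10fbae4462d", "2ff1d3a87493"} {
	            tailLogs(id)
	        }
	    }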
	I1025 18:48:54.180651    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:48:59.183172    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:48:59.183425    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:48:59.205547    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:48:59.205683    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:48:59.221593    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:48:59.221680    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:48:59.233666    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:48:59.233749    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:48:59.244791    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:48:59.244880    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:48:59.255290    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:48:59.255371    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:48:59.266035    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:48:59.266110    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:48:59.275887    4810 logs.go:282] 0 containers: []
	W1025 18:48:59.275900    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:48:59.275962    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:48:59.286566    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:48:59.286591    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:48:59.286597    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:48:59.305486    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:48:59.305497    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:48:59.319423    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:48:59.319434    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:48:59.330537    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:48:59.330550    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:48:59.359978    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:48:59.359989    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:48:59.376907    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:48:59.376917    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:48:59.388755    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:48:59.388766    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:48:59.406625    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:48:59.406638    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:48:59.425097    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:48:59.425109    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:48:59.439135    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:48:59.439149    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:48:59.459452    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:48:59.459465    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:48:59.487290    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:48:59.487305    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:48:59.505354    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:48:59.505364    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:48:59.517663    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:48:59.517675    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:48:59.541887    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:48:59.541895    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:48:59.554613    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:48:59.554622    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:48:59.590274    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:48:59.590286    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
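	(Besides container logs, each sweep tails three host-level sources: the kubelet journal, the docker/cri-docker journals, and a severity-filtered dmesg. The sketch below runs the same three command strings through bash -c, as the log does; the command strings are copied verbatim from the gathers above.)

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        sources := [][2]string{
	            {"kubelet", "sudo journalctl -u kubelet -n 400"},
	            {"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
	            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	        }
	        for _, s := range sources {
	            out, err := exec.Command("/bin/bash", "-c", s[1]).CombinedOutput()
	            fmt.Printf("== %s (err: %v) ==\n%s\n", s[0], err, out)
	        }
	    }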
	I1025 18:49:02.096506    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:07.098905    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:07.099115    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:07.120133    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:49:07.120237    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:07.137531    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:49:07.137638    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:07.150931    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:49:07.151016    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:07.164509    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:49:07.164595    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:07.174782    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:49:07.174849    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:07.185740    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:49:07.185821    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:07.196596    4810 logs.go:282] 0 containers: []
	W1025 18:49:07.196607    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:07.196670    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:07.214442    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:49:07.214467    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:49:07.214473    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:49:07.249514    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:49:07.249532    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:49:07.265689    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:49:07.265701    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:07.278934    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:07.278945    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:07.313751    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:49:07.313764    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:49:07.329203    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:49:07.329214    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:49:07.341751    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:49:07.341764    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:49:07.353658    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:49:07.353672    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:49:07.371299    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:07.371310    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:07.400091    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:49:07.400100    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:49:07.414077    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:49:07.414087    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:49:07.429350    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:49:07.429363    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:49:07.441362    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:07.441374    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:07.465957    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:07.465964    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:07.470829    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:49:07.470838    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:49:07.484575    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:49:07.484585    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:49:07.496834    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:49:07.496843    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:49:10.022826    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:15.025259    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:15.025431    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:15.036398    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:49:15.036485    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:15.046660    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:49:15.046743    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:15.057296    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:49:15.057370    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:15.067788    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:49:15.067865    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:15.078297    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:49:15.078375    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:15.089493    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:49:15.089591    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:15.104234    4810 logs.go:282] 0 containers: []
	W1025 18:49:15.104246    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:15.104313    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:15.115502    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:49:15.115545    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:49:15.115553    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:49:15.127558    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:49:15.127569    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:49:15.138948    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:49:15.138958    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:49:15.153673    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:49:15.153688    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:49:15.170664    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:49:15.170673    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:49:15.182273    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:15.182283    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:15.212765    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:15.212775    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:15.251309    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:49:15.251320    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:49:15.265360    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:49:15.265371    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:49:15.280795    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:49:15.280807    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:49:15.308341    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:15.308355    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:15.312926    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:49:15.312933    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:49:15.326855    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:49:15.326870    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:49:15.345334    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:15.345349    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:15.372023    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:49:15.372030    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:15.383488    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:49:15.383502    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:49:15.395859    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:49:15.395871    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
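	(The "describe nodes" gather is the one step that needs the apiserver itself; it is attempted with the kubectl binary bundled inside the VM, pointed at the VM-local kubeconfig. A sketch of that invocation — paths copied verbatim from the log; this is an illustration, not minikube's code.)

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.24.1/kubectl",
	            "describe", "nodes", "--kubeconfig=/var/lib/minikube/kubeconfig")
	        out, err := cmd.CombinedOutput()
	        if err != nil {
	            // expected to fail fast while the apiserver is unreachable
	            fmt.Println("describe nodes failed:", err)
	        }
	        fmt.Print(string(out))
	    }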
	I1025 18:49:17.908695    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:22.911441    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:22.911607    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:22.923861    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:49:22.923947    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:22.934763    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:49:22.934834    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:22.945313    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:49:22.945391    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:22.956040    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:49:22.956122    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:22.966538    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:49:22.966643    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:22.976950    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:49:22.977027    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:22.987041    4810 logs.go:282] 0 containers: []
	W1025 18:49:22.987053    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:22.987117    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:23.008462    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:49:23.008481    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:23.008486    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:23.041865    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:49:23.041876    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:49:23.056276    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:49:23.056286    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:49:23.070305    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:49:23.070316    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:49:23.088266    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:49:23.088276    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:49:23.099418    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:49:23.099431    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:49:23.116878    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:49:23.116889    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:49:23.128949    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:49:23.128961    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:49:23.147785    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:49:23.147799    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:49:23.162405    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:23.162418    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:23.167263    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:49:23.167269    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:49:23.193955    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:49:23.193966    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:49:23.215299    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:23.215313    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:23.240576    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:23.240584    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:23.275581    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:49:23.275592    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:49:23.287937    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:49:23.287952    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:49:23.305536    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:49:23.305545    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:25.819923    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:30.821588    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:30.821749    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:30.836966    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:49:30.837057    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:30.849485    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:49:30.849566    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:30.865135    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:49:30.865208    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:30.875590    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:49:30.875669    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:30.885785    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:49:30.885862    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:30.896001    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:49:30.896081    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:30.910551    4810 logs.go:282] 0 containers: []
	W1025 18:49:30.910565    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:30.910624    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:30.921684    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:49:30.921699    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:30.921704    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:30.945704    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:30.945712    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:30.984689    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:49:30.984705    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:49:31.000414    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:49:31.000425    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:49:31.013473    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:49:31.013484    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:31.026312    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:49:31.026327    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:49:31.039348    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:49:31.039364    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:49:31.057298    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:49:31.057309    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:49:31.072821    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:49:31.072833    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:49:31.084237    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:49:31.084250    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:49:31.101515    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:49:31.101524    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:49:31.123762    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:49:31.123775    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:49:31.135302    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:31.135313    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:31.164777    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:31.164786    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:31.169001    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:49:31.169009    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:49:31.182958    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:49:31.182970    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:49:31.197012    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:49:31.197023    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:49:33.721539    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:38.724008    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:38.724194    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:38.737315    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:49:38.737393    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:38.747803    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:49:38.747880    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:38.758425    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:49:38.758508    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:38.769016    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:49:38.769111    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:38.779266    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:49:38.779344    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:38.790172    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:49:38.790250    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:38.800847    4810 logs.go:282] 0 containers: []
	W1025 18:49:38.800860    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:38.800937    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:38.811532    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:49:38.811549    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:38.811554    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:38.835173    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:38.835183    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:38.870411    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:49:38.870422    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:49:38.884837    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:49:38.884849    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:49:38.897282    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:49:38.897297    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:49:38.912360    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:49:38.912371    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:49:38.928634    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:49:38.928647    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:49:38.946604    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:49:38.946614    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:49:38.958197    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:49:38.958208    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:49:38.971426    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:49:38.971438    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:49:38.990145    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:49:38.990156    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:39.002583    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:39.002596    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:39.033294    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:49:39.033304    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:49:39.056439    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:49:39.056449    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:49:39.067621    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:49:39.067632    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:49:39.079285    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:39.079296    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:39.083494    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:49:39.083500    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:49:41.599674    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:46.602095    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:46.602282    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:46.621698    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:49:46.621791    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:46.635444    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:49:46.635521    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:46.647124    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:49:46.647208    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:46.658520    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:49:46.658602    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:46.669957    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:49:46.670039    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:46.681093    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:49:46.681160    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:46.691510    4810 logs.go:282] 0 containers: []
	W1025 18:49:46.691521    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:46.691583    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:46.702568    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:49:46.702584    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:49:46.702589    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:49:46.721029    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:49:46.721042    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:49:46.738294    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:49:46.738305    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:49:46.752579    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:49:46.752592    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:49:46.767819    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:46.767833    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:46.791285    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:49:46.791293    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:49:46.806687    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:46.806696    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:46.837549    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:46.837557    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:46.875267    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:49:46.875279    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:49:46.888953    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:49:46.888963    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:49:46.913639    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:49:46.913648    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:49:46.931018    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:49:46.931033    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:46.943594    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:46.943605    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:46.948112    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:49:46.948119    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:49:46.964065    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:49:46.964075    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:49:46.975409    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:49:46.975421    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:49:47.001863    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:49:47.001874    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
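	(The cadence of these probes can be read off the timestamps: a 5 s client timeout, roughly 0.4 s of gathering, and a ~2.5 s pause before the next check gives one probe about every 8 s, so the twelve probes in this excerpt span roughly 87 s, 18:48:22 to 18:49:49. A back-of-the-envelope check — the 0.4 s gather figure is an eyeballed average, not measured.)

	    package main

	    import "fmt"

	    func main() {
	        // read off the timestamps above: 5s timeout + ~0.4s gather + ~2.5s pause
	        const timeout, gather, pause = 5.0, 0.4, 2.5 // seconds
	        period := timeout + gather + pause
	        // 12 probes => 11 intervals; log spans 18:48:22 -> 18:49:49 = 87s
	        fmt.Printf("period ≈ %.1fs, 11 intervals ≈ %.0fs\n", period, 11*period)
	    }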
	I1025 18:49:49.513484    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:49:54.515952    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:49:54.516200    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:49:54.536306    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:49:54.536451    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:49:54.550381    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:49:54.550469    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:49:54.562937    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:49:54.563053    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:49:54.573832    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:49:54.573905    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:49:54.584108    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:49:54.584175    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:49:54.594375    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:49:54.594446    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:49:54.604416    4810 logs.go:282] 0 containers: []
	W1025 18:49:54.604426    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:49:54.604486    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:49:54.615600    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:49:54.615614    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:49:54.615620    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:49:54.651449    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:49:54.651459    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:49:54.673439    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:49:54.673449    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:49:54.690853    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:49:54.690868    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:49:54.708329    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:49:54.708339    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:49:54.720180    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:49:54.720193    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:49:54.732003    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:49:54.732016    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:49:54.744792    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:49:54.744808    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:49:54.769499    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:49:54.769517    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:49:54.784526    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:49:54.784537    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:49:54.796822    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:49:54.796837    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:49:54.828300    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:49:54.828309    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:49:54.832547    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:49:54.832553    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:49:54.846597    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:49:54.846612    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:49:54.868891    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:49:54.868905    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:49:54.880368    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:49:54.880379    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:49:54.903301    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:49:54.903309    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:49:57.418534    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:02.421078    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:02.421272    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:02.442220    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:50:02.442318    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:02.455798    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:50:02.455885    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:02.468027    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:50:02.468099    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:02.479987    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:50:02.480059    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:02.490651    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:50:02.490727    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:02.501218    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:50:02.501290    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:02.511683    4810 logs.go:282] 0 containers: []
	W1025 18:50:02.511697    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:02.511760    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:02.522490    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:50:02.522509    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:02.522514    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:02.552929    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:50:02.552943    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:50:02.568161    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:50:02.568177    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:50:02.581364    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:50:02.581382    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:50:02.604153    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:50:02.604163    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:50:02.619010    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:02.619020    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:02.653174    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:50:02.653189    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:50:02.667153    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:50:02.667163    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:50:02.693433    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:50:02.693444    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:50:02.710825    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:02.710835    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:02.736140    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:50:02.736153    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:02.748000    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:02.748015    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:02.752309    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:50:02.752316    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:50:02.764517    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:50:02.764527    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:50:02.776045    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:50:02.776056    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:50:02.787729    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:50:02.787740    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:50:02.802742    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:50:02.802756    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:50:05.326182    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:10.328567    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:10.328879    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:10.381632    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:50:10.381728    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:10.399256    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:50:10.399341    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:10.414080    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:50:10.414161    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:10.425175    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:50:10.425261    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:10.435740    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:50:10.435813    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:10.446033    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:50:10.446112    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:10.458287    4810 logs.go:282] 0 containers: []
	W1025 18:50:10.458298    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:10.458363    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:10.468884    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:50:10.468909    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:10.468918    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:10.505157    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:50:10.505168    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:50:10.519497    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:50:10.519511    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:50:10.533233    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:50:10.533243    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:50:10.547680    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:50:10.547694    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:50:10.564685    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:50:10.564697    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:10.580413    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:10.580424    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:10.602953    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:10.602961    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:10.607308    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:50:10.607314    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:50:10.619135    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:50:10.619148    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:50:10.642245    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:50:10.642259    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:50:10.654501    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:50:10.654512    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:50:10.671301    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:50:10.671311    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:50:10.682486    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:50:10.682496    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:50:10.699692    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:50:10.699702    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:50:10.711752    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:10.711766    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:10.740351    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:50:10.740362    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:50:13.255112    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:18.257676    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:18.258143    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:18.288992    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:50:18.289137    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:18.308438    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:50:18.308551    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:18.322108    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:50:18.322197    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:18.334360    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:50:18.334436    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:18.350320    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:50:18.350401    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:18.361492    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:50:18.361576    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:18.374134    4810 logs.go:282] 0 containers: []
	W1025 18:50:18.374147    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:18.374208    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:18.385078    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:50:18.385096    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:50:18.385101    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:50:18.397743    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:50:18.397756    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:50:18.408875    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:50:18.408886    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:50:18.422510    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:18.422522    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:18.458731    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:50:18.458746    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:50:18.473949    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:50:18.473964    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:50:18.497235    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:50:18.497245    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:50:18.513375    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:50:18.513385    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:50:18.526429    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:18.526441    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:18.531142    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:50:18.531148    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:50:18.546081    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:50:18.546095    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:50:18.563435    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:50:18.563448    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:50:18.575558    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:18.575572    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:18.598536    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:50:18.598543    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:18.610751    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:18.610767    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:18.640304    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:50:18.640312    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:50:18.657736    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:50:18.657749    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:50:21.172456    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:26.175050    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:26.175536    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:26.211506    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:50:26.211677    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:26.232667    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:50:26.232785    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:26.247522    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:50:26.247606    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:26.259952    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:50:26.260042    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:26.270548    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:50:26.270629    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:26.281083    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:50:26.281159    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:26.290738    4810 logs.go:282] 0 containers: []
	W1025 18:50:26.290753    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:26.290822    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:26.301508    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:50:26.301525    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:50:26.301531    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:50:26.316733    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:50:26.316743    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:50:26.334447    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:50:26.334461    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:50:26.345655    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:50:26.345666    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:50:26.358530    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:50:26.358543    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:50:26.369841    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:50:26.369854    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:50:26.383773    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:50:26.383787    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:50:26.398225    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:50:26.398235    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:50:26.418324    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:26.418335    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:26.442047    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:50:26.442065    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:50:26.453616    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:50:26.453627    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:50:26.477302    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:50:26.477322    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:50:26.489090    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:50:26.489101    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:26.501092    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:26.501107    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:26.531378    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:26.531387    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:26.535803    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:26.535811    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:26.571788    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:50:26.571800    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:50:29.091190    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:34.093703    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:34.093894    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:34.107114    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:50:34.107189    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:34.121554    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:50:34.121635    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:34.132450    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:50:34.132530    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:34.143424    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:50:34.143506    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:34.153959    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:50:34.154040    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:34.164290    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:50:34.164372    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:34.174961    4810 logs.go:282] 0 containers: []
	W1025 18:50:34.174973    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:34.175040    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:34.185553    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:50:34.185569    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:34.185575    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:34.189637    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:50:34.189645    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:50:34.211667    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:50:34.211677    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:50:34.223438    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:50:34.223449    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:50:34.240297    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:50:34.240307    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:50:34.251431    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:34.251442    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:34.276353    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:34.276361    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:34.310470    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:50:34.310482    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:50:34.325804    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:50:34.325815    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:50:34.348836    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:50:34.348845    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:34.360434    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:34.360447    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:34.388314    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:50:34.388322    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:50:34.405612    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:50:34.405626    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:50:34.420068    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:50:34.420078    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:50:34.437521    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:50:34.437533    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:50:34.449436    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:50:34.449449    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:50:34.463277    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:50:34.463287    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:50:36.981426    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:41.982025    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:41.982357    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:42.008354    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:50:42.008486    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:42.025958    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:50:42.026054    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:42.039766    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:50:42.039845    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:42.051326    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:50:42.051399    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:42.062282    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:50:42.062368    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:42.072686    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:50:42.072758    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:42.083026    4810 logs.go:282] 0 containers: []
	W1025 18:50:42.083043    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:42.083111    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:42.093806    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:50:42.093823    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:50:42.093828    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:42.106066    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:42.106080    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:42.135738    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:42.135749    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:42.140027    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:50:42.140032    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:50:42.156812    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:50:42.156828    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:50:42.174524    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:50:42.174539    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:50:42.185528    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:42.185539    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:42.220142    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:50:42.220159    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:50:42.235272    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:50:42.235286    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:50:42.249325    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:50:42.249334    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:50:42.263816    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:50:42.263826    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:50:42.281503    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:50:42.281517    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:50:42.298702    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:42.298712    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:42.323572    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:50:42.323580    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:50:42.348811    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:50:42.348823    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:50:42.365509    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:50:42.365520    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:50:42.385167    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:50:42.385176    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:50:44.899082    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:49.901601    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:49.901771    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:50:49.925784    4810 logs.go:282] 2 containers: [c10fbae4462d 2ff1d3a87493]
	I1025 18:50:49.925873    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:50:49.938651    4810 logs.go:282] 2 containers: [c827d7d3ae5c 11b566bdf60e]
	I1025 18:50:49.938729    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:50:49.953391    4810 logs.go:282] 1 containers: [6f1cb4aa886d]
	I1025 18:50:49.953467    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:50:49.963722    4810 logs.go:282] 2 containers: [8078ecd12095 451202c4a948]
	I1025 18:50:49.963800    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:50:49.973827    4810 logs.go:282] 1 containers: [415b5bfac7cf]
	I1025 18:50:49.973892    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:50:49.984192    4810 logs.go:282] 2 containers: [72f7b2a40995 c1f0da53b12b]
	I1025 18:50:49.984268    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:50:49.994411    4810 logs.go:282] 0 containers: []
	W1025 18:50:49.994423    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:50:49.994484    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:50:50.004950    4810 logs.go:282] 2 containers: [2c3dc420f531 d67f7969a5df]
	I1025 18:50:50.004967    4810 logs.go:123] Gathering logs for kube-scheduler [8078ecd12095] ...
	I1025 18:50:50.004973    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8078ecd12095"
	I1025 18:50:50.028229    4810 logs.go:123] Gathering logs for kube-scheduler [451202c4a948] ...
	I1025 18:50:50.028246    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 451202c4a948"
	I1025 18:50:50.045989    4810 logs.go:123] Gathering logs for kube-controller-manager [72f7b2a40995] ...
	I1025 18:50:50.045999    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 72f7b2a40995"
	I1025 18:50:50.066418    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:50:50.066428    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:50:50.071032    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:50:50.071039    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:50:50.106083    4810 logs.go:123] Gathering logs for kube-apiserver [c10fbae4462d] ...
	I1025 18:50:50.106097    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c10fbae4462d"
	I1025 18:50:50.121561    4810 logs.go:123] Gathering logs for etcd [c827d7d3ae5c] ...
	I1025 18:50:50.121575    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c827d7d3ae5c"
	I1025 18:50:50.135031    4810 logs.go:123] Gathering logs for coredns [6f1cb4aa886d] ...
	I1025 18:50:50.135048    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f1cb4aa886d"
	I1025 18:50:50.147389    4810 logs.go:123] Gathering logs for storage-provisioner [d67f7969a5df] ...
	I1025 18:50:50.147402    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d67f7969a5df"
	I1025 18:50:50.159124    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:50:50.159136    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:50:50.189906    4810 logs.go:123] Gathering logs for kube-controller-manager [c1f0da53b12b] ...
	I1025 18:50:50.189921    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1f0da53b12b"
	I1025 18:50:50.208184    4810 logs.go:123] Gathering logs for storage-provisioner [2c3dc420f531] ...
	I1025 18:50:50.208193    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c3dc420f531"
	I1025 18:50:50.220885    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:50:50.220894    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:50:50.245755    4810 logs.go:123] Gathering logs for kube-apiserver [2ff1d3a87493] ...
	I1025 18:50:50.245767    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ff1d3a87493"
	I1025 18:50:50.259838    4810 logs.go:123] Gathering logs for etcd [11b566bdf60e] ...
	I1025 18:50:50.259851    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11b566bdf60e"
	I1025 18:50:50.275518    4810 logs.go:123] Gathering logs for kube-proxy [415b5bfac7cf] ...
	I1025 18:50:50.275531    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 415b5bfac7cf"
	I1025 18:50:50.288642    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:50:50.288655    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:50:52.803439    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:50:57.806109    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:50:57.806249    4810 kubeadm.go:597] duration metric: took 4m3.500441584s to restartPrimaryControlPlane
	W1025 18:50:57.806379    4810 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
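	
	The four minutes of log above repeat one fixed cycle: probe https://10.0.2.15:8443/healthz, hit the 5s client timeout ("Client.Timeout exceeded while awaiting headers"), re-gather every component's logs over SSH, and retry, until the restart budget lapses (kubeadm.go:597 above reports 4m3.5s before giving up). Below is a minimal, hypothetical Go sketch of that poll-until-deadline pattern; the endpoint, timeout, and budget are taken from the log, while the function names are invented for illustration and are not minikube's actual implementation.
	
	    package main
	
	    import (
	        "crypto/tls"
	        "errors"
	        "fmt"
	        "net/http"
	        "time"
	    )
	
	    // pollHealthz probes url until it returns 200 OK or the overall budget
	    // is exhausted. The 5s client timeout mirrors the "Client.Timeout
	    // exceeded" errors in the log above; this is a sketch, not minikube code.
	    func pollHealthz(url string, interval, budget time.Duration) error {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            Transport: &http.Transport{
	                // The apiserver serves a self-signed cert on 10.0.2.15.
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        deadline := time.Now().Add(budget)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    return nil // control plane is healthy
	                }
	            }
	            // On each failure the real run re-gathers component logs
	            // (kubelet, etcd, apiserver, ...) before sleeping and retrying.
	            time.Sleep(interval)
	        }
	        return errors.New("control plane never became healthy within budget")
	    }
	
	    func main() {
	        err := pollHealthz("https://10.0.2.15:8443/healthz", 3*time.Second, 4*time.Minute)
	        fmt.Println(err)
	    }
	
	In the failing run above every probe timed out, so the loop exhausted its budget and minikube fell through to the "reset cluster" path that follows.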
	I1025 18:50:57.806430    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1025 18:50:58.870417    4810 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.063948584s)
	I1025 18:50:58.870489    4810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 18:50:58.875171    4810 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 18:50:58.878566    4810 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 18:50:58.881701    4810 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 18:50:58.881708    4810 kubeadm.go:157] found existing configuration files:
	
	I1025 18:50:58.881743    4810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/admin.conf
	I1025 18:50:58.884462    4810 kubeadm.go:163] "https://control-plane.minikube.internal:62543" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 18:50:58.884492    4810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 18:50:58.887100    4810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/kubelet.conf
	I1025 18:50:58.890135    4810 kubeadm.go:163] "https://control-plane.minikube.internal:62543" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 18:50:58.890162    4810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 18:50:58.892830    4810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/controller-manager.conf
	I1025 18:50:58.895190    4810 kubeadm.go:163] "https://control-plane.minikube.internal:62543" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 18:50:58.895214    4810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 18:50:58.898198    4810 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/scheduler.conf
	I1025 18:50:58.901074    4810 kubeadm.go:163] "https://control-plane.minikube.internal:62543" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:62543 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 18:50:58.901105    4810 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
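	
	The four grep/rm pairs above implement a simple sweep: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint, and any file that does not contain it (here, any file at all, since kubeadm reset removed them) is deleted so the upcoming kubeadm init can regenerate it. A hypothetical Go sketch of that sweep follows; runOnNode is an invented stand-in for minikube's ssh_runner and only prints the commands it would run.
	
	    package main
	
	    import "fmt"
	
	    // runOnNode is a hypothetical stand-in for minikube's ssh_runner: here
	    // it only prints the command; on the node it would execute it over SSH.
	    func runOnNode(cmd string) error {
	        fmt.Println("Run:", cmd)
	        return fmt.Errorf("simulated non-zero exit") // as if grep matched nothing
	    }
	
	    func main() {
	        // Mirror of the log above: any kubeconfig that does not mention the
	        // expected endpoint is removed so kubeadm init rewrites it.
	        endpoint := "https://control-plane.minikube.internal:62543"
	        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	        for _, f := range files {
	            path := "/etc/kubernetes/" + f
	            if runOnNode("sudo grep "+endpoint+" "+path) != nil {
	                runOnNode("sudo rm -f " + path)
	            }
	        }
	    }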
	I1025 18:50:58.903724    4810 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1025 18:50:58.919921    4810 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1025 18:50:58.920032    4810 kubeadm.go:310] [preflight] Running pre-flight checks
	I1025 18:50:58.972154    4810 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 18:50:58.972209    4810 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 18:50:58.972265    4810 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1025 18:50:59.020674    4810 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 18:50:59.024872    4810 out.go:235]   - Generating certificates and keys ...
	I1025 18:50:59.024909    4810 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1025 18:50:59.024941    4810 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1025 18:50:59.024982    4810 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1025 18:50:59.025014    4810 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1025 18:50:59.025049    4810 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1025 18:50:59.025082    4810 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1025 18:50:59.025114    4810 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1025 18:50:59.025153    4810 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1025 18:50:59.025191    4810 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1025 18:50:59.025228    4810 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1025 18:50:59.025245    4810 kubeadm.go:310] [certs] Using the existing "sa" key
	I1025 18:50:59.025274    4810 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 18:50:59.087096    4810 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 18:50:59.206299    4810 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 18:50:59.268475    4810 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 18:50:59.352682    4810 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 18:50:59.384287    4810 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 18:50:59.384679    4810 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 18:50:59.384776    4810 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1025 18:50:59.466938    4810 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 18:50:59.471104    4810 out.go:235]   - Booting up control plane ...
	I1025 18:50:59.471147    4810 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 18:50:59.471191    4810 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 18:50:59.471230    4810 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 18:50:59.471276    4810 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 18:50:59.471430    4810 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 18:51:03.974804    4810 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503011 seconds
	I1025 18:51:03.974868    4810 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 18:51:03.978629    4810 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 18:51:04.497698    4810 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 18:51:04.498197    4810 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-473000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 18:51:05.002531    4810 kubeadm.go:310] [bootstrap-token] Using token: c28pqe.2cav7zn00sxzo3a6
	I1025 18:51:05.005691    4810 out.go:235]   - Configuring RBAC rules ...
	I1025 18:51:05.005752    4810 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 18:51:05.005793    4810 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 18:51:05.008084    4810 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 18:51:05.009118    4810 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 18:51:05.010155    4810 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 18:51:05.011057    4810 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 18:51:05.014165    4810 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 18:51:05.162972    4810 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1025 18:51:05.406701    4810 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1025 18:51:05.407157    4810 kubeadm.go:310] 
	I1025 18:51:05.407196    4810 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1025 18:51:05.407201    4810 kubeadm.go:310] 
	I1025 18:51:05.407243    4810 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1025 18:51:05.407246    4810 kubeadm.go:310] 
	I1025 18:51:05.407258    4810 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1025 18:51:05.407283    4810 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 18:51:05.407305    4810 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 18:51:05.407309    4810 kubeadm.go:310] 
	I1025 18:51:05.407346    4810 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1025 18:51:05.407352    4810 kubeadm.go:310] 
	I1025 18:51:05.407377    4810 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 18:51:05.407381    4810 kubeadm.go:310] 
	I1025 18:51:05.407413    4810 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1025 18:51:05.407452    4810 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 18:51:05.407509    4810 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 18:51:05.407512    4810 kubeadm.go:310] 
	I1025 18:51:05.407555    4810 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 18:51:05.407613    4810 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1025 18:51:05.407617    4810 kubeadm.go:310] 
	I1025 18:51:05.407665    4810 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token c28pqe.2cav7zn00sxzo3a6 \
	I1025 18:51:05.407713    4810 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9d1b51b46aa29bee5add6dcd2f2839d068831832311340de43d2611a1555cef \
	I1025 18:51:05.407730    4810 kubeadm.go:310] 	--control-plane 
	I1025 18:51:05.407733    4810 kubeadm.go:310] 
	I1025 18:51:05.407773    4810 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1025 18:51:05.407776    4810 kubeadm.go:310] 
	I1025 18:51:05.407819    4810 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token c28pqe.2cav7zn00sxzo3a6 \
	I1025 18:51:05.407901    4810 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:b9d1b51b46aa29bee5add6dcd2f2839d068831832311340de43d2611a1555cef 
	I1025 18:51:05.407978    4810 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 18:51:05.408035    4810 cni.go:84] Creating CNI manager for ""
	I1025 18:51:05.408046    4810 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:51:05.415414    4810 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1025 18:51:05.419496    4810 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1025 18:51:05.422589    4810 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
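The 496-byte conflist copied above is not reproduced in the log. As an illustration only, a representative bridge CNI config of the kind minikube generates might look like the sketch below; the subnet and plugin options here are assumptions, not the actual file contents:

    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF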
	I1025 18:51:05.427655    4810 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 18:51:05.427705    4810 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 18:51:05.427732    4810 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-473000 minikube.k8s.io/updated_at=2024_10_25T18_51_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=1152482f6f7d36cd6003386ded304100fbcb5064 minikube.k8s.io/name=stopped-upgrade-473000 minikube.k8s.io/primary=true
	I1025 18:51:05.469049    4810 ops.go:34] apiserver oom_adj: -16
	I1025 18:51:05.469045    4810 kubeadm.go:1113] duration metric: took 41.38025ms to wait for elevateKubeSystemPrivileges
	I1025 18:51:05.469065    4810 kubeadm.go:394] duration metric: took 4m11.1758825s to StartCluster
	I1025 18:51:05.469076    4810 settings.go:142] acquiring lock: {Name:mk3ff32802ddfc6c1e0425afbf853ac78c436759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:51:05.469178    4810 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:51:05.469602    4810 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/kubeconfig: {Name:mk88d1ac601cc80b64027f8557b82969027e8e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:51:05.469812    4810 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:51:05.469818    4810 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 18:51:05.469863    4810 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-473000"
	I1025 18:51:05.469871    4810 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-473000"
	W1025 18:51:05.469876    4810 addons.go:243] addon storage-provisioner should already be in state true
	I1025 18:51:05.469887    4810 host.go:66] Checking if "stopped-upgrade-473000" exists ...
	I1025 18:51:05.469902    4810 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-473000"
	I1025 18:51:05.469909    4810 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:51:05.469910    4810 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-473000"
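The addon flags above are minikube's internal bookkeeping for the two addons it is about to enable. For reference, a sketch of the equivalent user-facing operation (profile name taken from the log):

    minikube -p stopped-upgrade-473000 addons enable storage-provisioner
    minikube -p stopped-upgrade-473000 addons enable default-storageclass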
	I1025 18:51:05.474411    4810 out.go:177] * Verifying Kubernetes components...
	I1025 18:51:05.475038    4810 kapi.go:59] client config for stopped-upgrade-473000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/stopped-upgrade-473000/client.key", CAFile:"/Users/jenkins/minikube-integration/19868-1112/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104e52680), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 18:51:05.478775    4810 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-473000"
	W1025 18:51:05.478780    4810 addons.go:243] addon default-storageclass should already be in state true
	I1025 18:51:05.478787    4810 host.go:66] Checking if "stopped-upgrade-473000" exists ...
	I1025 18:51:05.479306    4810 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 18:51:05.479312    4810 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 18:51:05.479318    4810 sshutil.go:53] new ssh client: &{IP:localhost Port:62508 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/id_rsa Username:docker}
	I1025 18:51:05.482244    4810 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 18:51:05.486440    4810 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 18:51:05.490438    4810 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:51:05.490445    4810 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 18:51:05.490451    4810 sshutil.go:53] new ssh client: &{IP:localhost Port:62508 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/stopped-upgrade-473000/id_rsa Username:docker}
	I1025 18:51:05.578124    4810 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 18:51:05.586221    4810 api_server.go:52] waiting for apiserver process to appear ...
	I1025 18:51:05.586292    4810 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 18:51:05.590246    4810 api_server.go:72] duration metric: took 120.418916ms to wait for apiserver process to appear ...
	I1025 18:51:05.590257    4810 api_server.go:88] waiting for apiserver healthz status ...
	I1025 18:51:05.590264    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:05.603346    4810 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 18:51:05.659440    4810 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 18:51:05.971309    4810 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 18:51:05.971323    4810 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 18:51:10.592538    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:10.592586    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:15.592984    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:15.593005    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:20.593831    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:20.593871    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:25.594605    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:25.594669    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:30.595494    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:30.595517    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:35.596046    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:35.596118    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1025 18:51:35.974291    4810 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1025 18:51:35.978220    4810 out.go:177] * Enabled addons: storage-provisioner
	I1025 18:51:35.988166    4810 addons.go:510] duration metric: took 30.5176355s for enable addons: enabled=[storage-provisioner]
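Every healthz probe that follows times out against https://10.0.2.15:8443. 10.0.2.15 is the guest-side address of QEMU's user-mode (slirp) network, which is generally not reachable from the macOS host without an explicit port forward, consistent with the i/o timeout when listing StorageClasses above. A quick way to reproduce the failure by hand from the host (assuming curl is available):

    curl -sk --max-time 5 https://10.0.2.15:8443/healthz || echo "apiserver unreachable from host"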
	I1025 18:51:40.597320    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:40.597369    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:45.598933    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:45.599008    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:50.600903    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:50.600950    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:51:55.603236    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:51:55.603268    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:52:00.605560    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:52:00.605585    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:52:05.607883    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:52:05.607995    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:52:05.619440    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:52:05.619522    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:52:05.629973    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:52:05.630052    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:52:05.640840    4810 logs.go:282] 2 containers: [0793dc6a8b3c faf5a5d400ff]
	I1025 18:52:05.640918    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:52:05.651130    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:52:05.651206    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:52:05.661376    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:52:05.661454    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:52:05.671988    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:52:05.672066    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:52:05.681552    4810 logs.go:282] 0 containers: []
	W1025 18:52:05.681564    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:52:05.681626    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:52:05.691818    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:52:05.691834    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:52:05.691840    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:52:05.705944    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:52:05.705956    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:52:05.721541    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:52:05.721555    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:52:05.737507    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:52:05.737519    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:52:05.755133    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:52:05.755144    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:52:05.766458    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:52:05.766468    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:52:05.790205    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:52:05.790216    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:52:05.819942    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:52:05.819949    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:52:05.854679    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:52:05.854693    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:52:05.866150    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:52:05.866160    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:52:05.878153    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:52:05.878167    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:52:05.889756    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:52:05.889767    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:52:05.894007    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:52:05.894014    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
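The gathering pass above, which minikube repeats on every failed healthz cycle below, follows one pattern per control-plane component: locate the container by its kubelet-assigned k8s_ name prefix, then tail its logs. Condensed into a standalone sketch using only the commands already shown in the log:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
      # containers created by the docker shim are named k8s_<component>_...
      for id in $(docker ps -a --filter=name=k8s_$c --format '{{.ID}}'); do
        echo "== $c ($id) =="
        docker logs --tail 400 "$id"
      done
    done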
	I1025 18:52:08.410017    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:52:13.412674    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:52:13.412922    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:52:13.432365    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:52:13.432459    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:52:13.444143    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:52:13.444224    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:52:13.455904    4810 logs.go:282] 2 containers: [0793dc6a8b3c faf5a5d400ff]
	I1025 18:52:13.455988    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:52:13.466300    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:52:13.466379    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:52:13.476575    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:52:13.476649    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:52:13.486881    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:52:13.486961    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:52:13.496652    4810 logs.go:282] 0 containers: []
	W1025 18:52:13.496664    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:52:13.496730    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:52:13.506614    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:52:13.506626    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:52:13.506630    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:52:13.531876    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:52:13.531883    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:52:13.546107    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:52:13.546116    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:52:13.582814    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:52:13.582826    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:52:13.598976    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:52:13.598990    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:52:13.610942    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:52:13.610955    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:52:13.624343    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:52:13.624356    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:52:13.644667    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:52:13.644678    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:52:13.656128    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:52:13.656140    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:52:13.686287    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:52:13.686296    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:52:13.690828    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:52:13.690836    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:52:13.704525    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:52:13.704538    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:52:13.716134    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:52:13.716145    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:52:16.232699    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:52:21.235531    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:52:21.235998    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:52:21.268928    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:52:21.269063    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:52:21.288850    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:52:21.288961    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:52:21.303187    4810 logs.go:282] 2 containers: [0793dc6a8b3c faf5a5d400ff]
	I1025 18:52:21.303264    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:52:21.314732    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:52:21.314808    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:52:21.325087    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:52:21.325164    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:52:21.335165    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:52:21.335243    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:52:21.345211    4810 logs.go:282] 0 containers: []
	W1025 18:52:21.345220    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:52:21.345282    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:52:21.355788    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:52:21.355802    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:52:21.355808    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:52:21.368129    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:52:21.368140    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:52:21.380067    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:52:21.380083    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:52:21.413962    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:52:21.413976    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:52:21.418355    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:52:21.418363    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:52:21.432215    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:52:21.432228    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:52:21.448219    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:52:21.448228    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:52:21.460447    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:52:21.460460    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:52:21.476148    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:52:21.476158    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:52:21.488298    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:52:21.488316    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:52:21.506456    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:52:21.506467    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:52:21.537745    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:52:21.537753    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:52:21.548937    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:52:21.548948    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:52:24.075160    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:52:29.078062    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:52:29.078246    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:52:29.094093    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:52:29.094194    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:52:29.108302    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:52:29.108375    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:52:29.119056    4810 logs.go:282] 2 containers: [0793dc6a8b3c faf5a5d400ff]
	I1025 18:52:29.119133    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:52:29.133228    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:52:29.133301    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:52:29.143789    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:52:29.143863    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:52:29.155191    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:52:29.155264    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:52:29.165490    4810 logs.go:282] 0 containers: []
	W1025 18:52:29.165505    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:52:29.165573    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:52:29.175953    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:52:29.175970    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:52:29.175975    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:52:29.188294    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:52:29.188307    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:52:29.199808    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:52:29.199822    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:52:29.204616    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:52:29.204625    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:52:29.243744    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:52:29.243759    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:52:29.259720    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:52:29.259730    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:52:29.281091    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:52:29.281104    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:52:29.292676    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:52:29.292687    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:52:29.304098    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:52:29.304109    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:52:29.335594    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:52:29.335601    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:52:29.346918    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:52:29.346927    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:52:29.360925    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:52:29.360935    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:52:29.382082    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:52:29.382091    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:52:31.908859    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:52:36.911357    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:52:36.911634    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:52:36.935570    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:52:36.935690    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:52:36.950791    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:52:36.950878    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:52:36.964277    4810 logs.go:282] 2 containers: [0793dc6a8b3c faf5a5d400ff]
	I1025 18:52:36.964352    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:52:36.976770    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:52:36.976838    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:52:36.987585    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:52:36.987659    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:52:36.998093    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:52:36.998166    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:52:37.014361    4810 logs.go:282] 0 containers: []
	W1025 18:52:37.014372    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:52:37.014432    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:52:37.024466    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:52:37.024484    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:52:37.024489    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:52:37.045375    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:52:37.045389    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:52:37.057233    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:52:37.057243    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:52:37.069027    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:52:37.069036    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:52:37.083434    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:52:37.083446    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:52:37.101075    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:52:37.101085    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:52:37.132537    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:52:37.132546    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:52:37.188123    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:52:37.188135    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:52:37.208670    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:52:37.208683    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:52:37.220000    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:52:37.220015    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:52:37.234114    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:52:37.234128    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:52:37.238541    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:52:37.238549    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:52:37.250537    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:52:37.250550    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:52:39.776436    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:52:44.779511    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:52:44.780021    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:52:44.820187    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:52:44.820352    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:52:44.848136    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:52:44.848241    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:52:44.862584    4810 logs.go:282] 2 containers: [0793dc6a8b3c faf5a5d400ff]
	I1025 18:52:44.862667    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:52:44.874835    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:52:44.874918    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:52:44.886141    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:52:44.886223    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:52:44.897258    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:52:44.897333    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:52:44.907321    4810 logs.go:282] 0 containers: []
	W1025 18:52:44.907333    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:52:44.907400    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:52:44.918124    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:52:44.918141    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:52:44.918146    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:52:44.947969    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:52:44.947980    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:52:44.963349    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:52:44.963361    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:52:44.987100    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:52:44.987112    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:52:45.010662    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:52:45.010670    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:52:45.030408    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:52:45.030418    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:52:45.042256    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:52:45.042268    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:52:45.046477    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:52:45.046484    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:52:45.079462    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:52:45.079476    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:52:45.094032    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:52:45.094042    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:52:45.107181    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:52:45.107191    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:52:45.121970    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:52:45.121980    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:52:45.133338    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:52:45.133350    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:52:47.646088    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:52:52.648772    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:52:52.649283    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:52:52.689106    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:52:52.689251    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:52:52.710370    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:52:52.710497    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:52:52.725252    4810 logs.go:282] 2 containers: [0793dc6a8b3c faf5a5d400ff]
	I1025 18:52:52.725333    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:52:52.737971    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:52:52.738042    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:52:52.748935    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:52:52.749002    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:52:52.759618    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:52:52.759696    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:52:52.769779    4810 logs.go:282] 0 containers: []
	W1025 18:52:52.769790    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:52:52.769847    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:52:52.780041    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:52:52.780055    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:52:52.780061    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:52:52.796618    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:52:52.796629    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:52:52.827866    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:52:52.827875    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:52:52.832521    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:52:52.832530    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:52:52.871478    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:52:52.871492    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:52:52.885812    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:52:52.885825    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:52:52.898144    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:52:52.898156    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:52:52.922875    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:52:52.922884    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:52:52.934304    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:52:52.934317    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:52:52.948171    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:52:52.948181    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:52:52.959572    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:52:52.959582    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:52:52.974529    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:52:52.974542    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:52:52.987823    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:52:52.987834    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:52:55.508036    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:53:00.510647    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:53:00.510988    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:53:00.537571    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:53:00.537699    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:53:00.555799    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:53:00.555889    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:53:00.568865    4810 logs.go:282] 2 containers: [0793dc6a8b3c faf5a5d400ff]
	I1025 18:53:00.568941    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:53:00.580666    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:53:00.580740    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:53:00.591159    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:53:00.591234    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:53:00.601934    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:53:00.602008    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:53:00.611921    4810 logs.go:282] 0 containers: []
	W1025 18:53:00.611931    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:53:00.611988    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:53:00.622549    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:53:00.622564    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:53:00.622570    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:53:00.634572    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:53:00.634586    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:53:00.646474    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:53:00.646483    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:53:00.657455    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:53:00.657470    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:53:00.668964    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:53:00.668975    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:53:00.705818    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:53:00.705830    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:53:00.720332    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:53:00.720343    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:53:00.734434    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:53:00.734446    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:53:00.748395    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:53:00.748408    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:53:00.769537    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:53:00.769550    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:53:00.794466    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:53:00.794473    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:53:00.824896    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:53:00.824902    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:53:00.829420    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:53:00.829428    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:53:03.343180    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:53:08.346065    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:53:08.346608    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:53:08.387740    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:53:08.387892    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:53:08.409502    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:53:08.409629    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:53:08.425441    4810 logs.go:282] 2 containers: [0793dc6a8b3c faf5a5d400ff]
	I1025 18:53:08.425524    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:53:08.438253    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:53:08.438332    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:53:08.449452    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:53:08.449521    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:53:08.460454    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:53:08.460535    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:53:08.471599    4810 logs.go:282] 0 containers: []
	W1025 18:53:08.471611    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:53:08.471678    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:53:08.482901    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:53:08.482919    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:53:08.482924    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:53:08.512376    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:53:08.512384    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:53:08.516856    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:53:08.516867    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:53:08.531332    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:53:08.531344    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:53:08.543232    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:53:08.543244    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:53:08.560941    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:53:08.560951    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:53:08.572237    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:53:08.572252    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:53:08.643279    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:53:08.643293    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:53:08.657468    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:53:08.657481    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:53:08.669287    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:53:08.669301    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:53:08.684341    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:53:08.684354    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:53:08.695632    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:53:08.695644    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:53:08.707382    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:53:08.707394    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:53:11.233142    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:53:16.236030    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:53:16.236557    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:53:16.271365    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:53:16.271510    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:53:16.291899    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:53:16.292011    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:53:16.307159    4810 logs.go:282] 2 containers: [0793dc6a8b3c faf5a5d400ff]
	I1025 18:53:16.307251    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:53:16.318909    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:53:16.318990    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:53:16.329397    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:53:16.329473    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:53:16.339753    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:53:16.339828    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:53:16.349858    4810 logs.go:282] 0 containers: []
	W1025 18:53:16.349870    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:53:16.349939    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:53:16.360583    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:53:16.360599    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:53:16.360605    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:53:16.365325    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:53:16.365334    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:53:16.402144    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:53:16.402157    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:53:16.413966    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:53:16.413980    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:53:16.428484    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:53:16.428497    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:53:16.458865    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:53:16.458872    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:53:16.477578    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:53:16.477590    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:53:16.495675    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:53:16.495689    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:53:16.507472    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:53:16.507484    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:53:16.518834    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:53:16.518844    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:53:16.537220    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:53:16.537232    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:53:16.552917    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:53:16.552930    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:53:16.576460    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:53:16.576469    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
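The "Gathering logs for ..." pass that closes each cycle tails every discovered container, then the host-side sources: the kubelet and docker/cri-docker journald units, dmesg, and kubectl describe nodes. A condensed sketch of that pass under the same assumptions (IDs taken from this run; docker, sudo, and journalctl available on the guest; illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func tail(name string, args ...string) {
        out, _ := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("== %s %v ==\n%s\n", name, args, out)
    }

    func main() {
        // apiserver, etcd, scheduler IDs from this run
        for _, id := range []string{"3e8882f6eb16", "977a2f25492a", "4119eab86575"} {
            tail("docker", "logs", "--tail", "400", id)
        }
        tail("sudo", "journalctl", "-u", "kubelet", "-n", "400")
        tail("sudo", "journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400")
        tail("sudo", "/bin/sh", "-c",
            "dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
    }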
	I1025 18:53:19.089869    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:53:24.092187    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:53:24.092455    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:53:24.114726    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:53:24.114830    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:53:24.129256    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:53:24.129337    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:53:24.142207    4810 logs.go:282] 4 containers: [2185f5e6d22f 8f326d783203 0793dc6a8b3c faf5a5d400ff]
	I1025 18:53:24.142286    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:53:24.153063    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:53:24.153137    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:53:24.166390    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:53:24.166459    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:53:24.176385    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:53:24.176450    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:53:24.186313    4810 logs.go:282] 0 containers: []
	W1025 18:53:24.186324    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:53:24.186381    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:53:24.197846    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:53:24.197872    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:53:24.197877    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:53:24.209905    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:53:24.209915    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:53:24.230876    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:53:24.230887    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:53:24.235328    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:53:24.235336    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:53:24.246696    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:53:24.246709    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:53:24.261171    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:53:24.261184    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:53:24.272596    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:53:24.272606    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:53:24.303262    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:53:24.303272    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:53:24.317983    4810 logs.go:123] Gathering logs for coredns [2185f5e6d22f] ...
	I1025 18:53:24.317996    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2185f5e6d22f"
	I1025 18:53:24.329068    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:53:24.329081    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:53:24.364936    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:53:24.364950    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:53:24.376231    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:53:24.376246    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:53:24.388067    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:53:24.388077    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:53:24.405943    4810 logs.go:123] Gathering logs for coredns [8f326d783203] ...
	I1025 18:53:24.405956    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f326d783203"
	I1025 18:53:24.417624    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:53:24.417636    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
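One Run: line worth decoding is the "container status" command: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a. If crictl is absent, the command substitution degrades to the bare word crictl, that sudo invocation fails with command-not-found, and the trailing || falls through to docker ps -a. The same try-crictl-then-docker shape in Go, illustrative only:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
        if err != nil {
            // crictl missing or unusable: fall back to the docker CLI,
            // mirroring the "|| sudo docker ps -a" branch above
            out, err = exec.Command("docker", "ps", "-a").CombinedOutput()
        }
        if err != nil {
            fmt.Println("no usable container runtime CLI:", err)
            return
        }
        fmt.Print(string(out))
    }

The remaining cycles below repeat the probe, discovery, and gathering steps unchanged, varying only in timestamps and in the order the logs are collected.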
	I1025 18:53:26.943295    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:53:31.945612    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:53:31.945789    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:53:31.965607    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:53:31.965703    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:53:31.980704    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:53:31.980777    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:53:31.992236    4810 logs.go:282] 4 containers: [2185f5e6d22f 8f326d783203 0793dc6a8b3c faf5a5d400ff]
	I1025 18:53:31.992316    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:53:32.002751    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:53:32.002825    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:53:32.013227    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:53:32.013297    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:53:32.026659    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:53:32.026728    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:53:32.036652    4810 logs.go:282] 0 containers: []
	W1025 18:53:32.036671    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:53:32.036727    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:53:32.047139    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:53:32.047159    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:53:32.047165    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:53:32.063910    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:53:32.063925    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:53:32.080949    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:53:32.080963    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:53:32.085481    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:53:32.085488    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:53:32.103208    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:53:32.103221    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:53:32.118550    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:53:32.118564    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:53:32.132573    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:53:32.132586    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:53:32.144057    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:53:32.144070    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:53:32.156759    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:53:32.156771    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:53:32.191487    4810 logs.go:123] Gathering logs for coredns [2185f5e6d22f] ...
	I1025 18:53:32.191502    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2185f5e6d22f"
	I1025 18:53:32.210546    4810 logs.go:123] Gathering logs for coredns [8f326d783203] ...
	I1025 18:53:32.210559    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f326d783203"
	I1025 18:53:32.221622    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:53:32.221634    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:53:32.236084    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:53:32.236094    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:53:32.248682    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:53:32.248696    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:53:32.273846    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:53:32.273853    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:53:34.806893    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:53:39.809763    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:53:39.810339    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:53:39.850353    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:53:39.850502    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:53:39.873828    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:53:39.873952    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:53:39.890762    4810 logs.go:282] 4 containers: [2185f5e6d22f 8f326d783203 0793dc6a8b3c faf5a5d400ff]
	I1025 18:53:39.890861    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:53:39.903640    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:53:39.903710    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:53:39.914815    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:53:39.914886    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:53:39.926801    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:53:39.926870    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:53:39.937618    4810 logs.go:282] 0 containers: []
	W1025 18:53:39.937629    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:53:39.937684    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:53:39.948759    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:53:39.948780    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:53:39.948786    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:53:39.981069    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:53:39.981080    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:53:40.016080    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:53:40.016093    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:53:40.033713    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:53:40.033727    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:53:40.038191    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:53:40.038200    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:53:40.052433    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:53:40.052445    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:53:40.064301    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:53:40.064312    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:53:40.075917    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:53:40.075929    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:53:40.087997    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:53:40.088010    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:53:40.109503    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:53:40.109513    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:53:40.128762    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:53:40.128774    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:53:40.140799    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:53:40.140808    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:53:40.154756    4810 logs.go:123] Gathering logs for coredns [2185f5e6d22f] ...
	I1025 18:53:40.154767    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2185f5e6d22f"
	I1025 18:53:40.167011    4810 logs.go:123] Gathering logs for coredns [8f326d783203] ...
	I1025 18:53:40.167025    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f326d783203"
	I1025 18:53:40.178531    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:53:40.178543    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:53:42.704729    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:53:47.705583    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:53:47.706162    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:53:47.743900    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:53:47.744044    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:53:47.766068    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:53:47.766184    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:53:47.782551    4810 logs.go:282] 4 containers: [2185f5e6d22f 8f326d783203 0793dc6a8b3c faf5a5d400ff]
	I1025 18:53:47.782637    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:53:47.800008    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:53:47.800085    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:53:47.810833    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:53:47.810910    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:53:47.821670    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:53:47.821744    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:53:47.834831    4810 logs.go:282] 0 containers: []
	W1025 18:53:47.834841    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:53:47.834898    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:53:47.845695    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:53:47.845717    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:53:47.845723    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:53:47.857334    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:53:47.857346    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:53:47.881028    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:53:47.881038    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:53:47.892486    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:53:47.892500    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:53:47.928032    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:53:47.928044    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:53:47.942609    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:53:47.942623    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:53:47.957053    4810 logs.go:123] Gathering logs for coredns [8f326d783203] ...
	I1025 18:53:47.957065    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f326d783203"
	I1025 18:53:47.968986    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:53:47.969000    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:53:48.000699    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:53:48.000708    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:53:48.005429    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:53:48.005438    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:53:48.020243    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:53:48.020255    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:53:48.032277    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:53:48.032286    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:53:48.056683    4810 logs.go:123] Gathering logs for coredns [2185f5e6d22f] ...
	I1025 18:53:48.056696    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2185f5e6d22f"
	I1025 18:53:48.068502    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:53:48.068516    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:53:48.080179    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:53:48.080191    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:53:50.594643    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:53:55.596009    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:53:55.596139    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:53:55.608715    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:53:55.608774    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:53:55.624978    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:53:55.625085    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:53:55.640742    4810 logs.go:282] 4 containers: [2185f5e6d22f 8f326d783203 0793dc6a8b3c faf5a5d400ff]
	I1025 18:53:55.640820    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:53:55.653544    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:53:55.653604    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:53:55.664695    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:53:55.664770    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:53:55.676079    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:53:55.676157    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:53:55.687797    4810 logs.go:282] 0 containers: []
	W1025 18:53:55.687811    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:53:55.687875    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:53:55.700422    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:53:55.700441    4810 logs.go:123] Gathering logs for coredns [8f326d783203] ...
	I1025 18:53:55.700446    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f326d783203"
	I1025 18:53:55.712958    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:53:55.712970    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:53:55.726809    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:53:55.726821    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:53:55.751454    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:53:55.751468    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:53:55.756860    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:53:55.756871    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:53:55.771677    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:53:55.771691    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:53:55.786043    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:53:55.786053    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:53:55.800370    4810 logs.go:123] Gathering logs for coredns [2185f5e6d22f] ...
	I1025 18:53:55.800383    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2185f5e6d22f"
	I1025 18:53:55.813376    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:53:55.813386    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:53:55.829203    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:53:55.829218    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:53:55.851186    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:53:55.851201    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:53:55.869381    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:53:55.869395    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:53:55.902264    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:53:55.902279    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:53:55.938242    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:53:55.938253    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:53:55.951574    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:53:55.951589    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:53:58.466730    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:54:03.469534    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:54:03.469721    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:54:03.487265    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:54:03.487366    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:54:03.500811    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:54:03.500889    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:54:03.513628    4810 logs.go:282] 4 containers: [2185f5e6d22f 8f326d783203 0793dc6a8b3c faf5a5d400ff]
	I1025 18:54:03.513716    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:54:03.526636    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:54:03.526709    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:54:03.539058    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:54:03.539142    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:54:03.550963    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:54:03.551041    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:54:03.562963    4810 logs.go:282] 0 containers: []
	W1025 18:54:03.562975    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:54:03.563038    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:54:03.582489    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:54:03.582510    4810 logs.go:123] Gathering logs for coredns [2185f5e6d22f] ...
	I1025 18:54:03.582516    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2185f5e6d22f"
	I1025 18:54:03.599944    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:54:03.599957    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:54:03.613764    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:54:03.613779    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:54:03.627236    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:54:03.627253    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:54:03.644108    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:54:03.644118    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:54:03.668937    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:54:03.668945    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:54:03.700405    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:54:03.700417    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:54:03.705469    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:54:03.705477    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:54:03.720342    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:54:03.720353    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:54:03.757652    4810 logs.go:123] Gathering logs for coredns [8f326d783203] ...
	I1025 18:54:03.757662    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f326d783203"
	I1025 18:54:03.769486    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:54:03.769497    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:54:03.781065    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:54:03.781077    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:54:03.795790    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:54:03.795803    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:54:03.811220    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:54:03.811230    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:54:03.830485    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:54:03.830496    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:54:06.342734    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:54:11.345201    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:54:11.345706    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:54:11.386261    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:54:11.386455    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:54:11.407794    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:54:11.407912    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:54:11.423704    4810 logs.go:282] 4 containers: [2185f5e6d22f 8f326d783203 0793dc6a8b3c faf5a5d400ff]
	I1025 18:54:11.423790    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:54:11.440376    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:54:11.440450    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:54:11.451640    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:54:11.451708    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:54:11.462247    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:54:11.462312    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:54:11.472307    4810 logs.go:282] 0 containers: []
	W1025 18:54:11.472321    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:54:11.472388    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:54:11.482904    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:54:11.482924    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:54:11.482929    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:54:11.495897    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:54:11.495909    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:54:11.507661    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:54:11.507671    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:54:11.537300    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:54:11.537310    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:54:11.541777    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:54:11.541785    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:54:11.556038    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:54:11.556049    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:54:11.569937    4810 logs.go:123] Gathering logs for coredns [8f326d783203] ...
	I1025 18:54:11.569946    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f326d783203"
	I1025 18:54:11.582312    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:54:11.582323    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:54:11.594614    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:54:11.594624    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:54:11.607267    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:54:11.607278    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:54:11.624036    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:54:11.624047    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:54:11.641673    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:54:11.641685    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:54:11.677369    4810 logs.go:123] Gathering logs for coredns [2185f5e6d22f] ...
	I1025 18:54:11.677380    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2185f5e6d22f"
	I1025 18:54:11.689932    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:54:11.689944    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:54:11.713569    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:54:11.713576    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:54:14.230449    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:54:19.232928    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:54:19.233038    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:54:19.244965    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:54:19.245042    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:54:19.256539    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:54:19.256626    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:54:19.268774    4810 logs.go:282] 4 containers: [2185f5e6d22f 8f326d783203 0793dc6a8b3c faf5a5d400ff]
	I1025 18:54:19.268858    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:54:19.279842    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:54:19.279915    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:54:19.291574    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:54:19.291671    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:54:19.303116    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:54:19.303196    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:54:19.314499    4810 logs.go:282] 0 containers: []
	W1025 18:54:19.314514    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:54:19.314580    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:54:19.326016    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:54:19.326032    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:54:19.326037    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:54:19.339747    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:54:19.339761    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:54:19.344875    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:54:19.344885    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:54:19.359312    4810 logs.go:123] Gathering logs for coredns [2185f5e6d22f] ...
	I1025 18:54:19.359325    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2185f5e6d22f"
	I1025 18:54:19.371389    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:54:19.371400    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:54:19.385567    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:54:19.385578    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:54:19.410379    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:54:19.410394    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:54:19.423118    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:54:19.423130    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:54:19.461504    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:54:19.461516    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:54:19.478084    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:54:19.478105    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:54:19.497474    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:54:19.497482    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:54:19.513099    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:54:19.513111    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:54:19.531423    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:54:19.531437    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:54:19.562788    4810 logs.go:123] Gathering logs for coredns [8f326d783203] ...
	I1025 18:54:19.562805    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f326d783203"
	I1025 18:54:19.588213    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:54:19.588223    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:54:22.101436    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:54:27.103720    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:54:27.104293    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:54:27.141540    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:54:27.141691    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:54:27.161981    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:54:27.162092    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:54:27.177522    4810 logs.go:282] 4 containers: [2185f5e6d22f 8f326d783203 0793dc6a8b3c faf5a5d400ff]
	I1025 18:54:27.177609    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:54:27.189637    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:54:27.189713    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:54:27.203024    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:54:27.203097    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:54:27.213828    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:54:27.213896    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:54:27.225589    4810 logs.go:282] 0 containers: []
	W1025 18:54:27.225603    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:54:27.225672    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:54:27.237004    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:54:27.237022    4810 logs.go:123] Gathering logs for coredns [2185f5e6d22f] ...
	I1025 18:54:27.237028    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2185f5e6d22f"
	I1025 18:54:27.251023    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:54:27.251034    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:54:27.255214    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:54:27.255223    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:54:27.269195    4810 logs.go:123] Gathering logs for coredns [8f326d783203] ...
	I1025 18:54:27.269207    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f326d783203"
	I1025 18:54:27.281416    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:54:27.281429    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:54:27.293323    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:54:27.293337    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:54:27.324701    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:54:27.324709    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:54:27.340147    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:54:27.340159    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:54:27.351866    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:54:27.351877    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:54:27.363678    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:54:27.363689    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:54:27.375032    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:54:27.375042    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:54:27.386585    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:54:27.386596    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:54:27.420328    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:54:27.420337    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:54:27.435696    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:54:27.435710    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:54:27.453674    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:54:27.453685    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:54:29.980580    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:54:34.983336    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:54:34.983613    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:54:35.008991    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:54:35.009122    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:54:35.026253    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:54:35.026347    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:54:35.038897    4810 logs.go:282] 4 containers: [2185f5e6d22f 8f326d783203 0793dc6a8b3c faf5a5d400ff]
	I1025 18:54:35.038983    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:54:35.049966    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:54:35.050032    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:54:35.060035    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:54:35.060115    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:54:35.070761    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:54:35.070843    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:54:35.081368    4810 logs.go:282] 0 containers: []
	W1025 18:54:35.081380    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:54:35.081446    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:54:35.091493    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:54:35.091510    4810 logs.go:123] Gathering logs for coredns [8f326d783203] ...
	I1025 18:54:35.091515    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f326d783203"
	I1025 18:54:35.103575    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:54:35.103585    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:54:35.114609    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:54:35.114623    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:54:35.118824    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:54:35.118833    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:54:35.153490    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:54:35.153503    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:54:35.167274    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:54:35.167284    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:54:35.179352    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:54:35.179362    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:54:35.190548    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:54:35.190559    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:54:35.204774    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:54:35.204787    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:54:35.219984    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:54:35.219993    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:54:35.238481    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:54:35.238490    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:54:35.249681    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:54:35.249694    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:54:35.274133    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:54:35.274142    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:54:35.303886    4810 logs.go:123] Gathering logs for coredns [2185f5e6d22f] ...
	I1025 18:54:35.303896    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2185f5e6d22f"
	I1025 18:54:35.315304    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:54:35.315315    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:54:37.828746    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:54:42.831560    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:54:42.831641    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:54:42.842961    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:54:42.843039    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:54:42.854530    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:54:42.854609    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:54:42.866264    4810 logs.go:282] 4 containers: [2185f5e6d22f 8f326d783203 0793dc6a8b3c faf5a5d400ff]
	I1025 18:54:42.866363    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:54:42.877043    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:54:42.877232    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:54:42.889307    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:54:42.889382    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:54:42.905690    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:54:42.905788    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:54:42.916769    4810 logs.go:282] 0 containers: []
	W1025 18:54:42.916781    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:54:42.916850    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:54:42.929002    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:54:42.929018    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:54:42.929023    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:54:42.944704    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:54:42.944716    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:54:42.959835    4810 logs.go:123] Gathering logs for coredns [8f326d783203] ...
	I1025 18:54:42.959843    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f326d783203"
	I1025 18:54:42.971687    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:54:42.971699    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:54:42.995276    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:54:42.995290    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:54:43.007384    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:54:43.007392    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:54:43.031709    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:54:43.031725    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:54:43.038334    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:54:43.038344    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:54:43.053326    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:54:43.053338    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:54:43.065430    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:54:43.065443    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:54:43.077393    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:54:43.077405    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:54:43.109168    4810 logs.go:123] Gathering logs for coredns [2185f5e6d22f] ...
	I1025 18:54:43.109185    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2185f5e6d22f"
	I1025 18:54:43.121922    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:54:43.121935    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:54:43.133924    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:54:43.133933    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:54:43.173202    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:54:43.173213    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:54:45.690668    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:54:50.693071    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:54:50.693668    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:54:50.740403    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:54:50.740551    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:54:50.761643    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:54:50.761753    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:54:50.779071    4810 logs.go:282] 4 containers: [2185f5e6d22f 8f326d783203 0793dc6a8b3c faf5a5d400ff]
	I1025 18:54:50.779160    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:54:50.791059    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:54:50.791135    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:54:50.802198    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:54:50.802273    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:54:50.812978    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:54:50.813044    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:54:50.828314    4810 logs.go:282] 0 containers: []
	W1025 18:54:50.828327    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:54:50.828391    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:54:50.838274    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:54:50.838290    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:54:50.838295    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:54:50.867513    4810 logs.go:123] Gathering logs for coredns [8f326d783203] ...
	I1025 18:54:50.867521    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f326d783203"
	I1025 18:54:50.880481    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:54:50.880493    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:54:50.894196    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:54:50.894210    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:54:50.908054    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:54:50.908067    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:54:50.925512    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:54:50.925523    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:54:50.944807    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:54:50.944819    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:54:50.949559    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:54:50.949566    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:54:50.963634    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:54:50.963648    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:54:50.977208    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:54:50.977220    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:54:51.000837    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:54:51.000848    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:54:51.012386    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:54:51.012399    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:54:51.047607    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:54:51.047621    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:54:51.061696    4810 logs.go:123] Gathering logs for coredns [2185f5e6d22f] ...
	I1025 18:54:51.061710    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2185f5e6d22f"
	I1025 18:54:51.073301    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:54:51.073312    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:54:53.590285    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:54:58.593407    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:54:58.593979    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1025 18:54:58.632837    4810 logs.go:282] 1 containers: [3e8882f6eb16]
	I1025 18:54:58.632975    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1025 18:54:58.659116    4810 logs.go:282] 1 containers: [977a2f25492a]
	I1025 18:54:58.659233    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1025 18:54:58.674367    4810 logs.go:282] 4 containers: [2185f5e6d22f 8f326d783203 0793dc6a8b3c faf5a5d400ff]
	I1025 18:54:58.674448    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1025 18:54:58.690853    4810 logs.go:282] 1 containers: [4119eab86575]
	I1025 18:54:58.690930    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1025 18:54:58.701611    4810 logs.go:282] 1 containers: [a4f080b69419]
	I1025 18:54:58.701688    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1025 18:54:58.712349    4810 logs.go:282] 1 containers: [9a02c3ad8f29]
	I1025 18:54:58.712422    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1025 18:54:58.722843    4810 logs.go:282] 0 containers: []
	W1025 18:54:58.722854    4810 logs.go:284] No container was found matching "kindnet"
	I1025 18:54:58.722919    4810 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1025 18:54:58.733459    4810 logs.go:282] 1 containers: [be4239685473]
	I1025 18:54:58.733475    4810 logs.go:123] Gathering logs for dmesg ...
	I1025 18:54:58.733480    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 18:54:58.737980    4810 logs.go:123] Gathering logs for coredns [8f326d783203] ...
	I1025 18:54:58.737990    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f326d783203"
	I1025 18:54:58.750304    4810 logs.go:123] Gathering logs for coredns [0793dc6a8b3c] ...
	I1025 18:54:58.750317    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0793dc6a8b3c"
	I1025 18:54:58.762122    4810 logs.go:123] Gathering logs for kubelet ...
	I1025 18:54:58.762134    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 18:54:58.794132    4810 logs.go:123] Gathering logs for describe nodes ...
	I1025 18:54:58.794144    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1025 18:54:58.827934    4810 logs.go:123] Gathering logs for coredns [2185f5e6d22f] ...
	I1025 18:54:58.827949    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2185f5e6d22f"
	I1025 18:54:58.839304    4810 logs.go:123] Gathering logs for kube-controller-manager [9a02c3ad8f29] ...
	I1025 18:54:58.839318    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a02c3ad8f29"
	I1025 18:54:58.856986    4810 logs.go:123] Gathering logs for storage-provisioner [be4239685473] ...
	I1025 18:54:58.856997    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 be4239685473"
	I1025 18:54:58.874203    4810 logs.go:123] Gathering logs for Docker ...
	I1025 18:54:58.874213    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1025 18:54:58.897389    4810 logs.go:123] Gathering logs for etcd [977a2f25492a] ...
	I1025 18:54:58.897398    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 977a2f25492a"
	I1025 18:54:58.911294    4810 logs.go:123] Gathering logs for coredns [faf5a5d400ff] ...
	I1025 18:54:58.911304    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf5a5d400ff"
	I1025 18:54:58.923421    4810 logs.go:123] Gathering logs for container status ...
	I1025 18:54:58.923432    4810 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 18:54:58.934770    4810 logs.go:123] Gathering logs for kube-apiserver [3e8882f6eb16] ...
	I1025 18:54:58.934781    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e8882f6eb16"
	I1025 18:54:58.949360    4810 logs.go:123] Gathering logs for kube-scheduler [4119eab86575] ...
	I1025 18:54:58.949370    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4119eab86575"
	I1025 18:54:58.963605    4810 logs.go:123] Gathering logs for kube-proxy [a4f080b69419] ...
	I1025 18:54:58.963614    4810 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a4f080b69419"
	I1025 18:55:01.479231    4810 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1025 18:55:06.482221    4810 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1025 18:55:06.487678    4810 out.go:201] 
	W1025 18:55:06.489379    4810 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1025 18:55:06.489385    4810 out.go:270] * 
	W1025 18:55:06.489846    4810 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:55:06.502512    4810 out.go:201] 
** /stderr **
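
A note on the "Gathering logs for ..." lines above: minikube's log collector is just shelling a fixed sweep of commands into the guest over SSH. A minimal by-hand equivalent, assuming the node is still reachable via `minikube ssh` (the container IDs are the ones from this run; substitute the output of `docker ps -a` on yours):

	# Per-container logs, same 400-line tail minikube uses:
	docker logs --tail 400 3e8882f6eb16   # kube-apiserver
	docker logs --tail 400 977a2f25492a   # etcd
	# Host-side units and the kernel ring buffer:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400
	# Container inventory, falling back to docker if crictl is absent:
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
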
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-473000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (575.81s)
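
The terminal failure here is the healthz poll: minikube spent the whole 6m0s node wait looping on https://10.0.2.15:8443/healthz without ever getting a healthy answer, re-collecting logs between attempts. When triaging this kind of hang, probing the endpoint by hand from inside the guest quickly shows whether the apiserver is up at all; a sketch, assuming the guest is still reachable over `minikube ssh -p stopped-upgrade-473000`:

	# -k skips TLS verification, which is fine for a liveness poke; a
	# healthy apiserver answers with the literal string "ok".
	curl -k --max-time 5 https://10.0.2.15:8443/healthz; echo
	# Per-check breakdown (etcd, ping, ...) via the verbose form:
	curl -k --max-time 5 "https://10.0.2.15:8443/healthz?verbose"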

TestPause/serial/Start (9.98s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-797000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-797000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.90477225s)
-- stdout --
	* [pause-797000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-797000" primary control-plane node in "pause-797000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-797000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-797000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-797000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-797000 -n pause-797000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-797000 -n pause-797000: exit status 7 (72.48625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-797000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.98s)
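
This failure, and essentially every remaining one in this report, is the same host-side problem: nothing is listening on /var/run/socket_vmnet, the unix socket the qemu2 driver routes guest networking through, so the VM dies before provisioning even starts. A host-side triage sketch (the restart line assumes socket_vmnet was installed through Homebrew, which is an assumption about this CI host, not something the logs state):

	ls -l /var/run/socket_vmnet              # the socket file should exist...
	sudo lsof -U | grep socket_vmnet         # ...and have a live listener
	sudo brew services restart socket_vmnet  # bring the daemon back up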

TestNoKubernetes/serial/StartWithK8s (9.86s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-240000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-240000 --driver=qemu2 : exit status 80 (9.823138958s)
-- stdout --
	* [NoKubernetes-240000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-240000" primary control-plane node in "NoKubernetes-240000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-240000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-240000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-240000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-240000 -n NoKubernetes-240000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-240000 -n NoKubernetes-240000: exit status 7 (38.539166ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-240000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.86s)
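
On the post-mortem pattern above: `minikube status` exiting 7 is not the status command itself breaking. Its exit code appears to be a bitmask (judging from minikube's status command: bit 0 for host not running, bit 1 for apiserver not running, bit 2 for kubelet not running), so 7 means everything is down, consistent with the "Stopped" stdout and the harness's "(may be ok)" caveat. For example:

	out/minikube-darwin-arm64 status -p NoKubernetes-240000
	echo "status bits: $?"   # 7 here: host, apiserver and kubelet all stopped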

TestNoKubernetes/serial/StartWithStopK8s (5.3s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-240000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-240000 --no-kubernetes --driver=qemu2 : exit status 80 (5.254193916s)
-- stdout --
	* [NoKubernetes-240000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-240000
	* Restarting existing qemu2 VM for "NoKubernetes-240000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-240000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-240000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-240000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-240000 -n NoKubernetes-240000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-240000 -n NoKubernetes-240000: exit status 7 (50.340625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-240000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

TestNoKubernetes/serial/Start (5.33s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-240000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-240000 --no-kubernetes --driver=qemu2 : exit status 80 (5.259256167s)
-- stdout --
	* [NoKubernetes-240000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-240000
	* Restarting existing qemu2 VM for "NoKubernetes-240000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-240000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-240000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-240000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-240000 -n NoKubernetes-240000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-240000 -n NoKubernetes-240000: exit status 7 (74.623792ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-240000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.33s)
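
All four TestNoKubernetes subtests reuse the NoKubernetes-240000 profile, so once the first create failed, the later runs take the "Restarting existing qemu2 VM" path and hit the same dead socket; their ~5s durations are just the two-attempt retry loop, not a different bug. Outside CI, the logs' own suggestion is the way back to a clean first create:

	out/minikube-darwin-arm64 delete -p NoKubernetes-240000
	# then retry the start once socket_vmnet is accepting connections again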

TestNoKubernetes/serial/StartNoArgs (5.32s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-240000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-240000 --driver=qemu2 : exit status 80 (5.251465667s)
-- stdout --
	* [NoKubernetes-240000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-240000
	* Restarting existing qemu2 VM for "NoKubernetes-240000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-240000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-240000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-240000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-240000 -n NoKubernetes-240000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-240000 -n NoKubernetes-240000: exit status 7 (69.770708ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-240000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)

TestNetworkPlugins/group/auto/Start (9.81s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.809163916s)
-- stdout --
	* [auto-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-660000" primary control-plane node in "auto-660000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-660000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1025 18:53:19.200923    5125 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:53:19.201082    5125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:53:19.201085    5125 out.go:358] Setting ErrFile to fd 2...
	I1025 18:53:19.201088    5125 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:53:19.201208    5125 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:53:19.202418    5125 out.go:352] Setting JSON to false
	I1025 18:53:19.220551    5125 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4969,"bootTime":1729902630,"procs":559,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:53:19.220629    5125 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:53:19.226911    5125 out.go:177] * [auto-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:53:19.234940    5125 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:53:19.234947    5125 notify.go:220] Checking for updates...
	I1025 18:53:19.241733    5125 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:53:19.244753    5125 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:53:19.247680    5125 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:53:19.250703    5125 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:53:19.253763    5125 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:53:19.257019    5125 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:53:19.257096    5125 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:53:19.257149    5125 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:53:19.261704    5125 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:53:19.268747    5125 start.go:297] selected driver: qemu2
	I1025 18:53:19.268753    5125 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:53:19.268761    5125 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:53:19.271221    5125 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:53:19.274672    5125 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:53:19.277875    5125 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:53:19.277892    5125 cni.go:84] Creating CNI manager for ""
	I1025 18:53:19.277911    5125 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:53:19.277918    5125 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 18:53:19.277953    5125 start.go:340] cluster config:
	{Name:auto-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:53:19.282231    5125 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:53:19.290755    5125 out.go:177] * Starting "auto-660000" primary control-plane node in "auto-660000" cluster
	I1025 18:53:19.293699    5125 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:53:19.293713    5125 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:53:19.293722    5125 cache.go:56] Caching tarball of preloaded images
	I1025 18:53:19.293821    5125 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:53:19.293834    5125 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:53:19.293889    5125 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/auto-660000/config.json ...
	I1025 18:53:19.293899    5125 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/auto-660000/config.json: {Name:mk7a35b5c1abd291b688176f6ada0a809dcd793e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:53:19.294228    5125 start.go:360] acquireMachinesLock for auto-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:53:19.294276    5125 start.go:364] duration metric: took 43.708µs to acquireMachinesLock for "auto-660000"
	I1025 18:53:19.294288    5125 start.go:93] Provisioning new machine with config: &{Name:auto-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:53:19.294321    5125 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:53:19.302519    5125 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:53:19.317416    5125 start.go:159] libmachine.API.Create for "auto-660000" (driver="qemu2")
	I1025 18:53:19.317444    5125 client.go:168] LocalClient.Create starting
	I1025 18:53:19.317508    5125 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:53:19.317552    5125 main.go:141] libmachine: Decoding PEM data...
	I1025 18:53:19.317565    5125 main.go:141] libmachine: Parsing certificate...
	I1025 18:53:19.317602    5125 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:53:19.317634    5125 main.go:141] libmachine: Decoding PEM data...
	I1025 18:53:19.317642    5125 main.go:141] libmachine: Parsing certificate...
	I1025 18:53:19.318084    5125 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:53:19.472536    5125 main.go:141] libmachine: Creating SSH key...
	I1025 18:53:19.584367    5125 main.go:141] libmachine: Creating Disk image...
	I1025 18:53:19.584377    5125 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:53:19.584580    5125 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/disk.qcow2
	I1025 18:53:19.594957    5125 main.go:141] libmachine: STDOUT: 
	I1025 18:53:19.594979    5125 main.go:141] libmachine: STDERR: 
	I1025 18:53:19.595039    5125 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/disk.qcow2 +20000M
	I1025 18:53:19.603916    5125 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:53:19.603933    5125 main.go:141] libmachine: STDERR: 
	I1025 18:53:19.603952    5125 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/disk.qcow2
	I1025 18:53:19.603956    5125 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:53:19.603969    5125 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:53:19.603992    5125 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:c5:fa:11:87:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/disk.qcow2
	I1025 18:53:19.605842    5125 main.go:141] libmachine: STDOUT: 
	I1025 18:53:19.605856    5125 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:53:19.605875    5125 client.go:171] duration metric: took 288.418625ms to LocalClient.Create
	I1025 18:53:21.608051    5125 start.go:128] duration metric: took 2.313658459s to createHost
	I1025 18:53:21.608089    5125 start.go:83] releasing machines lock for "auto-660000", held for 2.313750459s
	W1025 18:53:21.608131    5125 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:53:21.618833    5125 out.go:177] * Deleting "auto-660000" in qemu2 ...
	W1025 18:53:21.638044    5125 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:53:21.638069    5125 start.go:729] Will try again in 5 seconds ...
	I1025 18:53:26.640424    5125 start.go:360] acquireMachinesLock for auto-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:53:26.641051    5125 start.go:364] duration metric: took 530.125µs to acquireMachinesLock for "auto-660000"
	I1025 18:53:26.641168    5125 start.go:93] Provisioning new machine with config: &{Name:auto-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:53:26.641464    5125 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:53:26.647174    5125 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:53:26.697846    5125 start.go:159] libmachine.API.Create for "auto-660000" (driver="qemu2")
	I1025 18:53:26.697905    5125 client.go:168] LocalClient.Create starting
	I1025 18:53:26.698050    5125 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:53:26.698133    5125 main.go:141] libmachine: Decoding PEM data...
	I1025 18:53:26.698163    5125 main.go:141] libmachine: Parsing certificate...
	I1025 18:53:26.698239    5125 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:53:26.698299    5125 main.go:141] libmachine: Decoding PEM data...
	I1025 18:53:26.698312    5125 main.go:141] libmachine: Parsing certificate...
	I1025 18:53:26.699043    5125 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:53:26.866537    5125 main.go:141] libmachine: Creating SSH key...
	I1025 18:53:26.911812    5125 main.go:141] libmachine: Creating Disk image...
	I1025 18:53:26.911818    5125 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:53:26.912013    5125 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/disk.qcow2
	I1025 18:53:26.922339    5125 main.go:141] libmachine: STDOUT: 
	I1025 18:53:26.922360    5125 main.go:141] libmachine: STDERR: 
	I1025 18:53:26.922419    5125 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/disk.qcow2 +20000M
	I1025 18:53:26.931320    5125 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:53:26.931337    5125 main.go:141] libmachine: STDERR: 
	I1025 18:53:26.931351    5125 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/disk.qcow2
	I1025 18:53:26.931355    5125 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:53:26.931365    5125 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:53:26.931398    5125 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:2c:a7:6d:d5:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/auto-660000/disk.qcow2
	I1025 18:53:26.933260    5125 main.go:141] libmachine: STDOUT: 
	I1025 18:53:26.933275    5125 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:53:26.933296    5125 client.go:171] duration metric: took 235.378375ms to LocalClient.Create
	I1025 18:53:28.935539    5125 start.go:128] duration metric: took 2.293983833s to createHost
	I1025 18:53:28.935607    5125 start.go:83] releasing machines lock for "auto-660000", held for 2.29447775s
	W1025 18:53:28.935937    5125 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:53:28.948518    5125 out.go:201] 
	W1025 18:53:28.952684    5125 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:53:28.952724    5125 out.go:270] * 
	W1025 18:53:28.955509    5125 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:53:28.965545    5125 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.81s)
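Note: every Start failure in this group has the same root cause, visible in the stderr above: the qemu2 driver launches each VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to the Unix socket /var/run/socket_vmnet is refused, i.e. the socket_vmnet daemon was not running (or not listening) on the build agent. A minimal shell sketch for checking and restarting the daemon on the agent follows; the launch flag and gateway address are assumptions taken from the socket_vmnet README, not from this report:

	# Check whether the daemon's Unix socket exists at the path the driver uses.
	ls -l /var/run/socket_vmnet

	# If it is missing, start the daemon by hand (assumed install prefix /opt/socket_vmnet
	# and assumed gateway 192.168.105.1, per the socket_vmnet README).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet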

TestNetworkPlugins/group/kindnet/Start (9.85s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.846441292s)

-- stdout --
	* [kindnet-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-660000" primary control-plane node in "kindnet-660000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-660000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:53:31.364252    5234 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:53:31.364413    5234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:53:31.364417    5234 out.go:358] Setting ErrFile to fd 2...
	I1025 18:53:31.364419    5234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:53:31.364551    5234 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:53:31.365797    5234 out.go:352] Setting JSON to false
	I1025 18:53:31.384610    5234 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4981,"bootTime":1729902630,"procs":559,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:53:31.384714    5234 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:53:31.389757    5234 out.go:177] * [kindnet-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:53:31.396740    5234 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:53:31.396802    5234 notify.go:220] Checking for updates...
	I1025 18:53:31.403736    5234 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:53:31.406725    5234 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:53:31.409702    5234 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:53:31.412666    5234 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:53:31.415685    5234 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:53:31.419115    5234 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:53:31.419182    5234 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:53:31.419251    5234 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:53:31.422670    5234 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:53:31.429781    5234 start.go:297] selected driver: qemu2
	I1025 18:53:31.429787    5234 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:53:31.429794    5234 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:53:31.432147    5234 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:53:31.433775    5234 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:53:31.436708    5234 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:53:31.436728    5234 cni.go:84] Creating CNI manager for "kindnet"
	I1025 18:53:31.436731    5234 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 18:53:31.436751    5234 start.go:340] cluster config:
	{Name:kindnet-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:53:31.440925    5234 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:53:31.449734    5234 out.go:177] * Starting "kindnet-660000" primary control-plane node in "kindnet-660000" cluster
	I1025 18:53:31.453718    5234 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:53:31.453733    5234 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:53:31.453745    5234 cache.go:56] Caching tarball of preloaded images
	I1025 18:53:31.453820    5234 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:53:31.453825    5234 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:53:31.453885    5234 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/kindnet-660000/config.json ...
	I1025 18:53:31.453895    5234 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/kindnet-660000/config.json: {Name:mkaa3b0a8517d862f4d6c6e517b879e545978e8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:53:31.454135    5234 start.go:360] acquireMachinesLock for kindnet-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:53:31.454179    5234 start.go:364] duration metric: took 38.708µs to acquireMachinesLock for "kindnet-660000"
	I1025 18:53:31.454191    5234 start.go:93] Provisioning new machine with config: &{Name:kindnet-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:53:31.454222    5234 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:53:31.462724    5234 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:53:31.477376    5234 start.go:159] libmachine.API.Create for "kindnet-660000" (driver="qemu2")
	I1025 18:53:31.477401    5234 client.go:168] LocalClient.Create starting
	I1025 18:53:31.477478    5234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:53:31.477523    5234 main.go:141] libmachine: Decoding PEM data...
	I1025 18:53:31.477536    5234 main.go:141] libmachine: Parsing certificate...
	I1025 18:53:31.477576    5234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:53:31.477606    5234 main.go:141] libmachine: Decoding PEM data...
	I1025 18:53:31.477614    5234 main.go:141] libmachine: Parsing certificate...
	I1025 18:53:31.477973    5234 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:53:31.633325    5234 main.go:141] libmachine: Creating SSH key...
	I1025 18:53:31.716319    5234 main.go:141] libmachine: Creating Disk image...
	I1025 18:53:31.716325    5234 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:53:31.716521    5234 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/disk.qcow2
	I1025 18:53:31.726455    5234 main.go:141] libmachine: STDOUT: 
	I1025 18:53:31.726474    5234 main.go:141] libmachine: STDERR: 
	I1025 18:53:31.726530    5234 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/disk.qcow2 +20000M
	I1025 18:53:31.735045    5234 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:53:31.735068    5234 main.go:141] libmachine: STDERR: 
	I1025 18:53:31.735082    5234 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/disk.qcow2
	I1025 18:53:31.735088    5234 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:53:31.735100    5234 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:53:31.735126    5234 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:fb:95:39:53:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/disk.qcow2
	I1025 18:53:31.736961    5234 main.go:141] libmachine: STDOUT: 
	I1025 18:53:31.736974    5234 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:53:31.736994    5234 client.go:171] duration metric: took 259.580417ms to LocalClient.Create
	I1025 18:53:33.739241    5234 start.go:128] duration metric: took 2.2849355s to createHost
	I1025 18:53:33.739326    5234 start.go:83] releasing machines lock for "kindnet-660000", held for 2.285085916s
	W1025 18:53:33.739383    5234 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:53:33.746232    5234 out.go:177] * Deleting "kindnet-660000" in qemu2 ...
	W1025 18:53:33.771587    5234 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:53:33.771619    5234 start.go:729] Will try again in 5 seconds ...
	I1025 18:53:38.773953    5234 start.go:360] acquireMachinesLock for kindnet-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:53:38.774563    5234 start.go:364] duration metric: took 472.833µs to acquireMachinesLock for "kindnet-660000"
	I1025 18:53:38.774740    5234 start.go:93] Provisioning new machine with config: &{Name:kindnet-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:53:38.775064    5234 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:53:38.784785    5234 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:53:38.833240    5234 start.go:159] libmachine.API.Create for "kindnet-660000" (driver="qemu2")
	I1025 18:53:38.833294    5234 client.go:168] LocalClient.Create starting
	I1025 18:53:38.833445    5234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:53:38.833525    5234 main.go:141] libmachine: Decoding PEM data...
	I1025 18:53:38.833543    5234 main.go:141] libmachine: Parsing certificate...
	I1025 18:53:38.833612    5234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:53:38.833676    5234 main.go:141] libmachine: Decoding PEM data...
	I1025 18:53:38.833692    5234 main.go:141] libmachine: Parsing certificate...
	I1025 18:53:38.834276    5234 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:53:38.999928    5234 main.go:141] libmachine: Creating SSH key...
	I1025 18:53:39.113449    5234 main.go:141] libmachine: Creating Disk image...
	I1025 18:53:39.113458    5234 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:53:39.113665    5234 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/disk.qcow2
	I1025 18:53:39.123856    5234 main.go:141] libmachine: STDOUT: 
	I1025 18:53:39.123878    5234 main.go:141] libmachine: STDERR: 
	I1025 18:53:39.123930    5234 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/disk.qcow2 +20000M
	I1025 18:53:39.132366    5234 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:53:39.132380    5234 main.go:141] libmachine: STDERR: 
	I1025 18:53:39.132390    5234 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/disk.qcow2
	I1025 18:53:39.132394    5234 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:53:39.132404    5234 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:53:39.132425    5234 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:17:49:c6:86:87 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kindnet-660000/disk.qcow2
	I1025 18:53:39.134184    5234 main.go:141] libmachine: STDOUT: 
	I1025 18:53:39.134198    5234 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:53:39.134208    5234 client.go:171] duration metric: took 300.900708ms to LocalClient.Create
	I1025 18:53:41.136460    5234 start.go:128] duration metric: took 2.361305791s to createHost
	I1025 18:53:41.136576    5234 start.go:83] releasing machines lock for "kindnet-660000", held for 2.361898125s
	W1025 18:53:41.136872    5234 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:53:41.146507    5234 out.go:201] 
	W1025 18:53:41.151786    5234 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:53:41.151872    5234 out.go:270] * 
	* 
	W1025 18:53:41.154775    5234 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:53:41.163499    5234 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.85s)

TestNetworkPlugins/group/calico/Start (9.8s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.796697209s)

-- stdout --
	* [calico-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-660000" primary control-plane node in "calico-660000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-660000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:53:43.598972    5351 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:53:43.599123    5351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:53:43.599126    5351 out.go:358] Setting ErrFile to fd 2...
	I1025 18:53:43.599128    5351 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:53:43.599278    5351 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:53:43.600442    5351 out.go:352] Setting JSON to false
	I1025 18:53:43.618306    5351 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4993,"bootTime":1729902630,"procs":561,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:53:43.618378    5351 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:53:43.624481    5351 out.go:177] * [calico-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:53:43.632672    5351 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:53:43.632737    5351 notify.go:220] Checking for updates...
	I1025 18:53:43.640571    5351 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:53:43.643633    5351 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:53:43.646535    5351 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:53:43.649583    5351 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:53:43.652637    5351 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:53:43.654416    5351 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:53:43.654498    5351 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:53:43.654548    5351 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:53:43.658582    5351 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:53:43.665474    5351 start.go:297] selected driver: qemu2
	I1025 18:53:43.665482    5351 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:53:43.665489    5351 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:53:43.667937    5351 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:53:43.670533    5351 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:53:43.673676    5351 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:53:43.673694    5351 cni.go:84] Creating CNI manager for "calico"
	I1025 18:53:43.673698    5351 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1025 18:53:43.673737    5351 start.go:340] cluster config:
	{Name:calico-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:53:43.678516    5351 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:53:43.686563    5351 out.go:177] * Starting "calico-660000" primary control-plane node in "calico-660000" cluster
	I1025 18:53:43.690584    5351 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:53:43.690599    5351 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:53:43.690607    5351 cache.go:56] Caching tarball of preloaded images
	I1025 18:53:43.690670    5351 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:53:43.690675    5351 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:53:43.690721    5351 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/calico-660000/config.json ...
	I1025 18:53:43.690731    5351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/calico-660000/config.json: {Name:mkafa9ecfa9553e0200d7679c4db566d218fb6e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:53:43.690962    5351 start.go:360] acquireMachinesLock for calico-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:53:43.691004    5351 start.go:364] duration metric: took 36.667µs to acquireMachinesLock for "calico-660000"
	I1025 18:53:43.691015    5351 start.go:93] Provisioning new machine with config: &{Name:calico-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:53:43.691052    5351 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:53:43.699599    5351 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:53:43.714038    5351 start.go:159] libmachine.API.Create for "calico-660000" (driver="qemu2")
	I1025 18:53:43.714065    5351 client.go:168] LocalClient.Create starting
	I1025 18:53:43.714134    5351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:53:43.714175    5351 main.go:141] libmachine: Decoding PEM data...
	I1025 18:53:43.714187    5351 main.go:141] libmachine: Parsing certificate...
	I1025 18:53:43.714234    5351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:53:43.714265    5351 main.go:141] libmachine: Decoding PEM data...
	I1025 18:53:43.714275    5351 main.go:141] libmachine: Parsing certificate...
	I1025 18:53:43.714625    5351 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:53:43.870557    5351 main.go:141] libmachine: Creating SSH key...
	I1025 18:53:43.926342    5351 main.go:141] libmachine: Creating Disk image...
	I1025 18:53:43.926348    5351 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:53:43.926538    5351 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/disk.qcow2
	I1025 18:53:43.936748    5351 main.go:141] libmachine: STDOUT: 
	I1025 18:53:43.936770    5351 main.go:141] libmachine: STDERR: 
	I1025 18:53:43.936821    5351 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/disk.qcow2 +20000M
	I1025 18:53:43.945461    5351 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:53:43.945477    5351 main.go:141] libmachine: STDERR: 
	I1025 18:53:43.945492    5351 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/disk.qcow2
	I1025 18:53:43.945496    5351 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:53:43.945510    5351 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:53:43.945553    5351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:1d:5f:42:2c:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/disk.qcow2
	I1025 18:53:43.947408    5351 main.go:141] libmachine: STDOUT: 
	I1025 18:53:43.947428    5351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:53:43.947448    5351 client.go:171] duration metric: took 233.373583ms to LocalClient.Create
	I1025 18:53:45.948056    5351 start.go:128] duration metric: took 2.256930625s to createHost
	I1025 18:53:45.948129    5351 start.go:83] releasing machines lock for "calico-660000", held for 2.257064916s
	W1025 18:53:45.948164    5351 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:53:45.958505    5351 out.go:177] * Deleting "calico-660000" in qemu2 ...
	W1025 18:53:45.977299    5351 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:53:45.977312    5351 start.go:729] Will try again in 5 seconds ...
	I1025 18:53:50.979822    5351 start.go:360] acquireMachinesLock for calico-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:53:50.980540    5351 start.go:364] duration metric: took 431.708µs to acquireMachinesLock for "calico-660000"
	I1025 18:53:50.980610    5351 start.go:93] Provisioning new machine with config: &{Name:calico-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:53:50.980826    5351 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:53:50.989453    5351 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:53:51.027977    5351 start.go:159] libmachine.API.Create for "calico-660000" (driver="qemu2")
	I1025 18:53:51.028050    5351 client.go:168] LocalClient.Create starting
	I1025 18:53:51.028200    5351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:53:51.028291    5351 main.go:141] libmachine: Decoding PEM data...
	I1025 18:53:51.028308    5351 main.go:141] libmachine: Parsing certificate...
	I1025 18:53:51.028377    5351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:53:51.028428    5351 main.go:141] libmachine: Decoding PEM data...
	I1025 18:53:51.028439    5351 main.go:141] libmachine: Parsing certificate...
	I1025 18:53:51.029001    5351 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:53:51.194470    5351 main.go:141] libmachine: Creating SSH key...
	I1025 18:53:51.298892    5351 main.go:141] libmachine: Creating Disk image...
	I1025 18:53:51.298899    5351 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:53:51.299095    5351 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/disk.qcow2
	I1025 18:53:51.309383    5351 main.go:141] libmachine: STDOUT: 
	I1025 18:53:51.309402    5351 main.go:141] libmachine: STDERR: 
	I1025 18:53:51.309455    5351 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/disk.qcow2 +20000M
	I1025 18:53:51.318111    5351 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:53:51.318126    5351 main.go:141] libmachine: STDERR: 
	I1025 18:53:51.318136    5351 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/disk.qcow2
	I1025 18:53:51.318141    5351 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:53:51.318152    5351 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:53:51.318180    5351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:b5:a4:39:08:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/calico-660000/disk.qcow2
	I1025 18:53:51.320025    5351 main.go:141] libmachine: STDOUT: 
	I1025 18:53:51.320039    5351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:53:51.320051    5351 client.go:171] duration metric: took 291.987208ms to LocalClient.Create
	I1025 18:53:53.322297    5351 start.go:128] duration metric: took 2.341369583s to createHost
	I1025 18:53:53.322391    5351 start.go:83] releasing machines lock for "calico-660000", held for 2.341776s
	W1025 18:53:53.322731    5351 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:53:53.334415    5351 out.go:201] 
	W1025 18:53:53.337504    5351 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:53:53.337528    5351 out.go:270] * 
	* 
	W1025 18:53:53.340014    5351 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:53:53.349339    5351 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.80s)
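Note: the refused connection can be probed without minikube. socket_vmnet_client connects to the daemon's socket, hands the vmnet file descriptor to its child as fd 3 (which is why every QEMU command line above contains -netdev socket,id=net0,fd=3), and then execs the given command. A sketch reusing the install paths from these logs; `true` is just a hypothetical no-op stand-in for the real QEMU command:

	# Fails with 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
	# while the daemon is down; execs the no-op command once the daemon is back up.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true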

TestNetworkPlugins/group/custom-flannel/Start (9.86s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.861387417s)

-- stdout --
	* [custom-flannel-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-660000" primary control-plane node in "custom-flannel-660000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-660000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:53:55.975085    5471 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:53:55.975287    5471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:53:55.975291    5471 out.go:358] Setting ErrFile to fd 2...
	I1025 18:53:55.975293    5471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:53:55.975463    5471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:53:55.976860    5471 out.go:352] Setting JSON to false
	I1025 18:53:55.995756    5471 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5005,"bootTime":1729902630,"procs":562,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:53:55.995832    5471 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:53:56.000774    5471 out.go:177] * [custom-flannel-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:53:56.008761    5471 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:53:56.008811    5471 notify.go:220] Checking for updates...
	I1025 18:53:56.014251    5471 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:53:56.017763    5471 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:53:56.020790    5471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:53:56.023784    5471 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:53:56.026774    5471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:53:56.030193    5471 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:53:56.030261    5471 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:53:56.030308    5471 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:53:56.034763    5471 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:53:56.041773    5471 start.go:297] selected driver: qemu2
	I1025 18:53:56.041779    5471 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:53:56.041786    5471 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:53:56.044345    5471 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:53:56.046726    5471 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:53:56.049841    5471 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:53:56.049861    5471 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1025 18:53:56.049873    5471 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1025 18:53:56.049901    5471 start.go:340] cluster config:
	{Name:custom-flannel-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:53:56.054584    5471 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:53:56.062699    5471 out.go:177] * Starting "custom-flannel-660000" primary control-plane node in "custom-flannel-660000" cluster
	I1025 18:53:56.066779    5471 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:53:56.066795    5471 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:53:56.066808    5471 cache.go:56] Caching tarball of preloaded images
	I1025 18:53:56.066892    5471 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:53:56.066897    5471 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:53:56.066963    5471 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/custom-flannel-660000/config.json ...
	I1025 18:53:56.066977    5471 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/custom-flannel-660000/config.json: {Name:mk1b4b06a5d71d02427feea5bbb307be12a8b10a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:53:56.067214    5471 start.go:360] acquireMachinesLock for custom-flannel-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:53:56.067262    5471 start.go:364] duration metric: took 39.75µs to acquireMachinesLock for "custom-flannel-660000"
	I1025 18:53:56.067274    5471 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:53:56.067309    5471 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:53:56.075768    5471 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:53:56.092189    5471 start.go:159] libmachine.API.Create for "custom-flannel-660000" (driver="qemu2")
	I1025 18:53:56.092232    5471 client.go:168] LocalClient.Create starting
	I1025 18:53:56.092302    5471 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:53:56.092341    5471 main.go:141] libmachine: Decoding PEM data...
	I1025 18:53:56.092357    5471 main.go:141] libmachine: Parsing certificate...
	I1025 18:53:56.092393    5471 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:53:56.092424    5471 main.go:141] libmachine: Decoding PEM data...
	I1025 18:53:56.092434    5471 main.go:141] libmachine: Parsing certificate...
	I1025 18:53:56.092887    5471 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:53:56.248018    5471 main.go:141] libmachine: Creating SSH key...
	I1025 18:53:56.333932    5471 main.go:141] libmachine: Creating Disk image...
	I1025 18:53:56.333938    5471 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:53:56.334122    5471 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/disk.qcow2
	I1025 18:53:56.344186    5471 main.go:141] libmachine: STDOUT: 
	I1025 18:53:56.344202    5471 main.go:141] libmachine: STDERR: 
	I1025 18:53:56.344278    5471 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/disk.qcow2 +20000M
	I1025 18:53:56.352929    5471 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:53:56.352944    5471 main.go:141] libmachine: STDERR: 
	I1025 18:53:56.352960    5471 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/disk.qcow2
	I1025 18:53:56.352968    5471 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:53:56.352981    5471 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:53:56.353007    5471 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:31:5b:8d:21:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/disk.qcow2
	I1025 18:53:56.354961    5471 main.go:141] libmachine: STDOUT: 
	I1025 18:53:56.354976    5471 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:53:56.354996    5471 client.go:171] duration metric: took 262.753167ms to LocalClient.Create
	I1025 18:53:58.357254    5471 start.go:128] duration metric: took 2.289861334s to createHost
	I1025 18:53:58.357357    5471 start.go:83] releasing machines lock for "custom-flannel-660000", held for 2.290032s
	W1025 18:53:58.357417    5471 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:53:58.374598    5471 out.go:177] * Deleting "custom-flannel-660000" in qemu2 ...
	W1025 18:53:58.400120    5471 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:53:58.400149    5471 start.go:729] Will try again in 5 seconds ...
	I1025 18:54:03.401149    5471 start.go:360] acquireMachinesLock for custom-flannel-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:54:03.401664    5471 start.go:364] duration metric: took 411µs to acquireMachinesLock for "custom-flannel-660000"
	I1025 18:54:03.401781    5471 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:54:03.401998    5471 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:54:03.411525    5471 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:54:03.452522    5471 start.go:159] libmachine.API.Create for "custom-flannel-660000" (driver="qemu2")
	I1025 18:54:03.452605    5471 client.go:168] LocalClient.Create starting
	I1025 18:54:03.452813    5471 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:54:03.452903    5471 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:03.452919    5471 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:03.452988    5471 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:54:03.453046    5471 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:03.453059    5471 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:03.453646    5471 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:54:03.628387    5471 main.go:141] libmachine: Creating SSH key...
	I1025 18:54:03.734914    5471 main.go:141] libmachine: Creating Disk image...
	I1025 18:54:03.734927    5471 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:54:03.735153    5471 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/disk.qcow2
	I1025 18:54:03.746268    5471 main.go:141] libmachine: STDOUT: 
	I1025 18:54:03.746297    5471 main.go:141] libmachine: STDERR: 
	I1025 18:54:03.746378    5471 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/disk.qcow2 +20000M
	I1025 18:54:03.756577    5471 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:54:03.756598    5471 main.go:141] libmachine: STDERR: 
	I1025 18:54:03.756611    5471 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/disk.qcow2
	I1025 18:54:03.756616    5471 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:54:03.756626    5471 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:54:03.756669    5471 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:f7:d9:18:44:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/custom-flannel-660000/disk.qcow2
	I1025 18:54:03.758899    5471 main.go:141] libmachine: STDOUT: 
	I1025 18:54:03.758915    5471 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:54:03.758927    5471 client.go:171] duration metric: took 306.29725ms to LocalClient.Create
	I1025 18:54:05.761177    5471 start.go:128] duration metric: took 2.359092583s to createHost
	I1025 18:54:05.761270    5471 start.go:83] releasing machines lock for "custom-flannel-660000", held for 2.359530334s
	W1025 18:54:05.761625    5471 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:54:05.771149    5471 out.go:201] 
	W1025 18:54:05.778301    5471 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:54:05.778338    5471 out.go:270] * 
	* 
	W1025 18:54:05.781029    5471 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:54:05.790205    5471 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.86s)
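Every failure in this group follows the same pattern: the qemu2 driver hands the VM off to /opt/socket_vmnet/bin/socket_vmnet_client, which immediately reports Failed to connect to "/var/run/socket_vmnet": Connection refused, so the VM never boots. A minimal, hypothetical Go probe of that unix socket (the path is taken verbatim from the SocketVMnetPath field in the cluster config above; the probe is illustrative and not part of the test suite) reproduces the exact check that is failing:

// socketprobe.go - illustrative sketch, not part of the minikube test suite.
// It dials the unix socket that socket_vmnet_client reports as unreachable
// in the failures above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// On this agent the dial should fail the same way the driver does,
		// e.g. "dial unix /var/run/socket_vmnet: connect: connection refused".
		fmt.Printf("probe failed: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails the same way, the socket_vmnet daemon is most likely not running on the agent (or the socket path or its permissions are wrong), which would explain why every qemu2-driver start in this group dies before provisioning begins.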

TestNetworkPlugins/group/false/Start (9.87s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.869735167s)

-- stdout --
	* [false-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-660000" primary control-plane node in "false-660000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-660000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:54:08.352249    5592 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:54:08.352410    5592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:54:08.352413    5592 out.go:358] Setting ErrFile to fd 2...
	I1025 18:54:08.352423    5592 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:54:08.352573    5592 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:54:08.353801    5592 out.go:352] Setting JSON to false
	I1025 18:54:08.372057    5592 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5018,"bootTime":1729902630,"procs":562,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:54:08.372142    5592 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:54:08.378176    5592 out.go:177] * [false-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:54:08.386059    5592 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:54:08.386149    5592 notify.go:220] Checking for updates...
	I1025 18:54:08.394015    5592 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:54:08.397032    5592 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:54:08.401067    5592 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:54:08.404084    5592 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:54:08.412029    5592 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:54:08.415383    5592 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:54:08.415450    5592 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:54:08.415501    5592 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:54:08.419037    5592 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:54:08.426137    5592 start.go:297] selected driver: qemu2
	I1025 18:54:08.426144    5592 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:54:08.426150    5592 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:54:08.428593    5592 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:54:08.432001    5592 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:54:08.435120    5592 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:54:08.435137    5592 cni.go:84] Creating CNI manager for "false"
	I1025 18:54:08.435169    5592 start.go:340] cluster config:
	{Name:false-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:54:08.439514    5592 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:54:08.447100    5592 out.go:177] * Starting "false-660000" primary control-plane node in "false-660000" cluster
	I1025 18:54:08.451133    5592 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:54:08.451147    5592 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:54:08.451154    5592 cache.go:56] Caching tarball of preloaded images
	I1025 18:54:08.451227    5592 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:54:08.451233    5592 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:54:08.451291    5592 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/false-660000/config.json ...
	I1025 18:54:08.451304    5592 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/false-660000/config.json: {Name:mka1911766b9b6aebc06644283bd6ed89d282ae9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:54:08.451580    5592 start.go:360] acquireMachinesLock for false-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:54:08.451627    5592 start.go:364] duration metric: took 41.5µs to acquireMachinesLock for "false-660000"
	I1025 18:54:08.451639    5592 start.go:93] Provisioning new machine with config: &{Name:false-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:54:08.451663    5592 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:54:08.459050    5592 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:54:08.473630    5592 start.go:159] libmachine.API.Create for "false-660000" (driver="qemu2")
	I1025 18:54:08.473661    5592 client.go:168] LocalClient.Create starting
	I1025 18:54:08.473747    5592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:54:08.473791    5592 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:08.473804    5592 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:08.473844    5592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:54:08.473872    5592 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:08.473878    5592 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:08.474337    5592 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:54:08.630385    5592 main.go:141] libmachine: Creating SSH key...
	I1025 18:54:08.701066    5592 main.go:141] libmachine: Creating Disk image...
	I1025 18:54:08.701073    5592 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:54:08.701267    5592 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/disk.qcow2
	I1025 18:54:08.711128    5592 main.go:141] libmachine: STDOUT: 
	I1025 18:54:08.711152    5592 main.go:141] libmachine: STDERR: 
	I1025 18:54:08.711223    5592 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/disk.qcow2 +20000M
	I1025 18:54:08.720372    5592 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:54:08.720390    5592 main.go:141] libmachine: STDERR: 
	I1025 18:54:08.720407    5592 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/disk.qcow2
	I1025 18:54:08.720412    5592 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:54:08.720423    5592 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:54:08.720460    5592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:17:fb:31:9e:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/disk.qcow2
	I1025 18:54:08.722454    5592 main.go:141] libmachine: STDOUT: 
	I1025 18:54:08.722469    5592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:54:08.722492    5592 client.go:171] duration metric: took 248.819417ms to LocalClient.Create
	I1025 18:54:10.724685    5592 start.go:128] duration metric: took 2.272933333s to createHost
	I1025 18:54:10.724723    5592 start.go:83] releasing machines lock for "false-660000", held for 2.273035834s
	W1025 18:54:10.724784    5592 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:54:10.734168    5592 out.go:177] * Deleting "false-660000" in qemu2 ...
	W1025 18:54:10.755045    5592 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:54:10.755059    5592 start.go:729] Will try again in 5 seconds ...
	I1025 18:54:15.757383    5592 start.go:360] acquireMachinesLock for false-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:54:15.757729    5592 start.go:364] duration metric: took 293.375µs to acquireMachinesLock for "false-660000"
	I1025 18:54:15.757803    5592 start.go:93] Provisioning new machine with config: &{Name:false-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:54:15.758040    5592 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:54:15.768464    5592 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:54:15.802284    5592 start.go:159] libmachine.API.Create for "false-660000" (driver="qemu2")
	I1025 18:54:15.802331    5592 client.go:168] LocalClient.Create starting
	I1025 18:54:15.802467    5592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:54:15.802547    5592 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:15.802563    5592 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:15.802625    5592 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:54:15.802674    5592 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:15.802694    5592 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:15.803144    5592 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:54:15.966666    5592 main.go:141] libmachine: Creating SSH key...
	I1025 18:54:16.128567    5592 main.go:141] libmachine: Creating Disk image...
	I1025 18:54:16.128584    5592 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:54:16.128813    5592 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/disk.qcow2
	I1025 18:54:16.139172    5592 main.go:141] libmachine: STDOUT: 
	I1025 18:54:16.139189    5592 main.go:141] libmachine: STDERR: 
	I1025 18:54:16.139244    5592 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/disk.qcow2 +20000M
	I1025 18:54:16.147764    5592 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:54:16.147779    5592 main.go:141] libmachine: STDERR: 
	I1025 18:54:16.147794    5592 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/disk.qcow2
	I1025 18:54:16.147798    5592 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:54:16.147809    5592 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:54:16.147835    5592 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:ec:6e:87:9e:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/false-660000/disk.qcow2
	I1025 18:54:16.149732    5592 main.go:141] libmachine: STDOUT: 
	I1025 18:54:16.149746    5592 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:54:16.149759    5592 client.go:171] duration metric: took 347.414625ms to LocalClient.Create
	I1025 18:54:18.152024    5592 start.go:128] duration metric: took 2.393884583s to createHost
	I1025 18:54:18.152111    5592 start.go:83] releasing machines lock for "false-660000", held for 2.394309959s
	W1025 18:54:18.152530    5592 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:54:18.162352    5592 out.go:201] 
	W1025 18:54:18.166324    5592 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:54:18.166390    5592 out.go:270] * 
	* 
	W1025 18:54:18.169781    5592 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:54:18.178344    5592 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.87s)
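The control flow is identical for every profile in this group: create the host, hit the refused socket, delete the profile, wait five seconds, retry once, then exit with status 80 (GUEST_PROVISION). The sketch below compresses that flow into a few lines of Go for readability; the function and messages are paraphrased from the log above and are not minikube's actual internals.

// retryflow.go - illustrative reconstruction of the start sequence seen in
// these logs; names and messages are paraphrased, not minikube internals.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for libmachine.API.Create, which fails while the
// socket_vmnet socket is refusing connections.
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const profile = "false-660000"
	if err := createHost(profile); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err = createHost(profile); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			os.Exit(80) // the exit status net_test.go:114 reports as "failed start"
		}
	}
}

Because the retry happens while the same daemon is still down, the second attempt is effectively guaranteed to fail, which is why every test in this group completes in roughly ten seconds: two ~2.3s create attempts plus the fixed 5s back-off.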

TestNetworkPlugins/group/enable-default-cni/Start (9.87s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.869584375s)

-- stdout --
	* [enable-default-cni-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-660000" primary control-plane node in "enable-default-cni-660000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-660000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:54:20.580701    5706 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:54:20.580882    5706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:54:20.580890    5706 out.go:358] Setting ErrFile to fd 2...
	I1025 18:54:20.580892    5706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:54:20.581025    5706 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:54:20.582714    5706 out.go:352] Setting JSON to false
	I1025 18:54:20.602855    5706 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5030,"bootTime":1729902630,"procs":561,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:54:20.602932    5706 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:54:20.607514    5706 out.go:177] * [enable-default-cni-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:54:20.615363    5706 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:54:20.615433    5706 notify.go:220] Checking for updates...
	I1025 18:54:20.620851    5706 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:54:20.624390    5706 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:54:20.627396    5706 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:54:20.630410    5706 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:54:20.633451    5706 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:54:20.636834    5706 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:54:20.636907    5706 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:54:20.636944    5706 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:54:20.641360    5706 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:54:20.648379    5706 start.go:297] selected driver: qemu2
	I1025 18:54:20.648386    5706 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:54:20.648393    5706 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:54:20.650959    5706 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:54:20.654358    5706 out.go:177] * Automatically selected the socket_vmnet network
	E1025 18:54:20.657421    5706 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1025 18:54:20.657445    5706 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:54:20.657463    5706 cni.go:84] Creating CNI manager for "bridge"
	I1025 18:54:20.657469    5706 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 18:54:20.657503    5706 start.go:340] cluster config:
	{Name:enable-default-cni-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:54:20.662054    5706 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:54:20.669296    5706 out.go:177] * Starting "enable-default-cni-660000" primary control-plane node in "enable-default-cni-660000" cluster
	I1025 18:54:20.673350    5706 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:54:20.673365    5706 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:54:20.673372    5706 cache.go:56] Caching tarball of preloaded images
	I1025 18:54:20.673440    5706 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:54:20.673445    5706 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:54:20.673498    5706 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/enable-default-cni-660000/config.json ...
	I1025 18:54:20.673508    5706 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/enable-default-cni-660000/config.json: {Name:mk3f9fde2035693677f9b183404a6155a0b10069 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:54:20.673749    5706 start.go:360] acquireMachinesLock for enable-default-cni-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:54:20.673796    5706 start.go:364] duration metric: took 39.584µs to acquireMachinesLock for "enable-default-cni-660000"
	I1025 18:54:20.673808    5706 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:54:20.673834    5706 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:54:20.677344    5706 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:54:20.693756    5706 start.go:159] libmachine.API.Create for "enable-default-cni-660000" (driver="qemu2")
	I1025 18:54:20.693786    5706 client.go:168] LocalClient.Create starting
	I1025 18:54:20.693864    5706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:54:20.693903    5706 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:20.693917    5706 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:20.693958    5706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:54:20.693991    5706 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:20.693997    5706 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:20.694386    5706 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:54:20.849669    5706 main.go:141] libmachine: Creating SSH key...
	I1025 18:54:20.902562    5706 main.go:141] libmachine: Creating Disk image...
	I1025 18:54:20.902568    5706 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:54:20.902770    5706 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/disk.qcow2
	I1025 18:54:20.912789    5706 main.go:141] libmachine: STDOUT: 
	I1025 18:54:20.912811    5706 main.go:141] libmachine: STDERR: 
	I1025 18:54:20.912871    5706 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/disk.qcow2 +20000M
	I1025 18:54:20.921602    5706 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:54:20.921620    5706 main.go:141] libmachine: STDERR: 
	I1025 18:54:20.921637    5706 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/disk.qcow2
	I1025 18:54:20.921643    5706 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:54:20.921654    5706 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:54:20.921685    5706 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:93:f0:bc:23:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/disk.qcow2
	I1025 18:54:20.923517    5706 main.go:141] libmachine: STDOUT: 
	I1025 18:54:20.923531    5706 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:54:20.923551    5706 client.go:171] duration metric: took 229.753417ms to LocalClient.Create
	I1025 18:54:22.926056    5706 start.go:128] duration metric: took 2.252055333s to createHost
	I1025 18:54:22.926203    5706 start.go:83] releasing machines lock for "enable-default-cni-660000", held for 2.252345125s
	W1025 18:54:22.926260    5706 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:54:22.937616    5706 out.go:177] * Deleting "enable-default-cni-660000" in qemu2 ...
	W1025 18:54:22.965330    5706 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:54:22.965366    5706 start.go:729] Will try again in 5 seconds ...
	I1025 18:54:27.967708    5706 start.go:360] acquireMachinesLock for enable-default-cni-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:54:27.968326    5706 start.go:364] duration metric: took 539.25µs to acquireMachinesLock for "enable-default-cni-660000"
	I1025 18:54:27.968398    5706 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:54:27.968747    5706 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:54:27.979460    5706 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:54:28.022884    5706 start.go:159] libmachine.API.Create for "enable-default-cni-660000" (driver="qemu2")
	I1025 18:54:28.022929    5706 client.go:168] LocalClient.Create starting
	I1025 18:54:28.023066    5706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:54:28.023166    5706 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:28.023185    5706 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:28.023244    5706 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:54:28.023301    5706 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:28.023315    5706 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:28.023968    5706 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:54:28.186501    5706 main.go:141] libmachine: Creating SSH key...
	I1025 18:54:28.361614    5706 main.go:141] libmachine: Creating Disk image...
	I1025 18:54:28.361624    5706 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:54:28.361854    5706 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/disk.qcow2
	I1025 18:54:28.372171    5706 main.go:141] libmachine: STDOUT: 
	I1025 18:54:28.372188    5706 main.go:141] libmachine: STDERR: 
	I1025 18:54:28.372244    5706 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/disk.qcow2 +20000M
	I1025 18:54:28.380728    5706 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:54:28.380742    5706 main.go:141] libmachine: STDERR: 
	I1025 18:54:28.380752    5706 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/disk.qcow2
	I1025 18:54:28.380764    5706 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:54:28.380774    5706 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:54:28.380809    5706 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:75:d2:b5:0a:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/enable-default-cni-660000/disk.qcow2
	I1025 18:54:28.382665    5706 main.go:141] libmachine: STDOUT: 
	I1025 18:54:28.382680    5706 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:54:28.382690    5706 client.go:171] duration metric: took 359.748125ms to LocalClient.Create
	I1025 18:54:30.384937    5706 start.go:128] duration metric: took 2.416095292s to createHost
	I1025 18:54:30.385063    5706 start.go:83] releasing machines lock for "enable-default-cni-660000", held for 2.416652375s
	W1025 18:54:30.385428    5706 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:54:30.391977    5706 out.go:201] 
	W1025 18:54:30.397084    5706 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:54:30.397113    5706 out.go:270] * 
	* 
	W1025 18:54:30.399063    5706 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:54:30.407909    5706 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.87s)
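
Every Start failure in this group dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet, so QEMU is never launched. A minimal triage sketch follows, using the paths recorded in the log above; the daemon binary location and the --vmnet-gateway flag reflect a default socket_vmnet install and are assumptions, not something this report confirms.

	# Is the daemon running, and does its socket exist? (paths from the log)
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, start it by hand (assumed default install layout; vmnet needs root):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet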

TestNetworkPlugins/group/flannel/Start (9.84s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.835966792s)

-- stdout --
	* [flannel-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-660000" primary control-plane node in "flannel-660000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-660000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:54:32.774579    5820 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:54:32.774759    5820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:54:32.774763    5820 out.go:358] Setting ErrFile to fd 2...
	I1025 18:54:32.774765    5820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:54:32.774914    5820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:54:32.776087    5820 out.go:352] Setting JSON to false
	I1025 18:54:32.794391    5820 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5042,"bootTime":1729902630,"procs":564,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:54:32.794470    5820 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:54:32.800495    5820 out.go:177] * [flannel-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:54:32.807476    5820 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:54:32.807516    5820 notify.go:220] Checking for updates...
	I1025 18:54:32.814459    5820 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:54:32.817467    5820 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:54:32.821496    5820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:54:32.824465    5820 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:54:32.827517    5820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:54:32.830907    5820 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:54:32.830981    5820 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:54:32.831041    5820 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:54:32.835466    5820 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:54:32.842463    5820 start.go:297] selected driver: qemu2
	I1025 18:54:32.842470    5820 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:54:32.842478    5820 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:54:32.845041    5820 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:54:32.848405    5820 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:54:32.851487    5820 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:54:32.851505    5820 cni.go:84] Creating CNI manager for "flannel"
	I1025 18:54:32.851515    5820 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1025 18:54:32.851552    5820 start.go:340] cluster config:
	{Name:flannel-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:54:32.856359    5820 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:54:32.864446    5820 out.go:177] * Starting "flannel-660000" primary control-plane node in "flannel-660000" cluster
	I1025 18:54:32.868407    5820 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:54:32.868426    5820 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:54:32.868437    5820 cache.go:56] Caching tarball of preloaded images
	I1025 18:54:32.868509    5820 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:54:32.868514    5820 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:54:32.868566    5820 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/flannel-660000/config.json ...
	I1025 18:54:32.868576    5820 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/flannel-660000/config.json: {Name:mkf084d10d7b367e7491743b27c3c3f2bc8fadee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:54:32.868935    5820 start.go:360] acquireMachinesLock for flannel-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:54:32.868994    5820 start.go:364] duration metric: took 53.417µs to acquireMachinesLock for "flannel-660000"
	I1025 18:54:32.869006    5820 start.go:93] Provisioning new machine with config: &{Name:flannel-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:54:32.869036    5820 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:54:32.876440    5820 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:54:32.893171    5820 start.go:159] libmachine.API.Create for "flannel-660000" (driver="qemu2")
	I1025 18:54:32.893201    5820 client.go:168] LocalClient.Create starting
	I1025 18:54:32.893272    5820 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:54:32.893310    5820 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:32.893318    5820 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:32.893355    5820 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:54:32.893384    5820 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:32.893393    5820 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:32.893804    5820 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:54:33.060120    5820 main.go:141] libmachine: Creating SSH key...
	I1025 18:54:33.175822    5820 main.go:141] libmachine: Creating Disk image...
	I1025 18:54:33.175832    5820 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:54:33.176034    5820 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/disk.qcow2
	I1025 18:54:33.186549    5820 main.go:141] libmachine: STDOUT: 
	I1025 18:54:33.186572    5820 main.go:141] libmachine: STDERR: 
	I1025 18:54:33.186629    5820 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/disk.qcow2 +20000M
	I1025 18:54:33.195489    5820 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:54:33.195507    5820 main.go:141] libmachine: STDERR: 
	I1025 18:54:33.195524    5820 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/disk.qcow2
	I1025 18:54:33.195530    5820 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:54:33.195543    5820 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:54:33.195581    5820 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:56:a9:83:6d:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/disk.qcow2
	I1025 18:54:33.197454    5820 main.go:141] libmachine: STDOUT: 
	I1025 18:54:33.197468    5820 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:54:33.197491    5820 client.go:171] duration metric: took 304.2755ms to LocalClient.Create
	I1025 18:54:35.199629    5820 start.go:128] duration metric: took 2.330528208s to createHost
	I1025 18:54:35.199667    5820 start.go:83] releasing machines lock for "flannel-660000", held for 2.330613083s
	W1025 18:54:35.199683    5820 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:54:35.203294    5820 out.go:177] * Deleting "flannel-660000" in qemu2 ...
	W1025 18:54:35.217614    5820 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:54:35.217623    5820 start.go:729] Will try again in 5 seconds ...
	I1025 18:54:40.220033    5820 start.go:360] acquireMachinesLock for flannel-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:54:40.220565    5820 start.go:364] duration metric: took 432.583µs to acquireMachinesLock for "flannel-660000"
	I1025 18:54:40.220714    5820 start.go:93] Provisioning new machine with config: &{Name:flannel-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:54:40.221039    5820 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:54:40.231792    5820 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:54:40.279935    5820 start.go:159] libmachine.API.Create for "flannel-660000" (driver="qemu2")
	I1025 18:54:40.279994    5820 client.go:168] LocalClient.Create starting
	I1025 18:54:40.280149    5820 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:54:40.280239    5820 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:40.280253    5820 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:40.280319    5820 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:54:40.280376    5820 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:40.280387    5820 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:40.281066    5820 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:54:40.451215    5820 main.go:141] libmachine: Creating SSH key...
	I1025 18:54:40.511581    5820 main.go:141] libmachine: Creating Disk image...
	I1025 18:54:40.511587    5820 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:54:40.511794    5820 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/disk.qcow2
	I1025 18:54:40.522089    5820 main.go:141] libmachine: STDOUT: 
	I1025 18:54:40.522115    5820 main.go:141] libmachine: STDERR: 
	I1025 18:54:40.522172    5820 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/disk.qcow2 +20000M
	I1025 18:54:40.530685    5820 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:54:40.530701    5820 main.go:141] libmachine: STDERR: 
	I1025 18:54:40.530712    5820 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/disk.qcow2
	I1025 18:54:40.530717    5820 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:54:40.530726    5820 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:54:40.530752    5820 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:47:68:6e:71:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/flannel-660000/disk.qcow2
	I1025 18:54:40.532577    5820 main.go:141] libmachine: STDOUT: 
	I1025 18:54:40.532592    5820 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:54:40.532605    5820 client.go:171] duration metric: took 252.600584ms to LocalClient.Create
	I1025 18:54:42.534861    5820 start.go:128] duration metric: took 2.313724041s to createHost
	I1025 18:54:42.534954    5820 start.go:83] releasing machines lock for "flannel-660000", held for 2.314309834s
	W1025 18:54:42.535330    5820 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:54:42.550094    5820 out.go:201] 
	W1025 18:54:42.553074    5820 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:54:42.553139    5820 out.go:270] * 
	* 
	W1025 18:54:42.555728    5820 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:54:42.565029    5820 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.84s)
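
Once the daemon is reachable again, the log's own advice is the most direct recovery: delete the half-created profile, then rerun the exact start command from net_test.go:112. The brew services line below is an assumption for a Homebrew-managed daemon; a manually started socket_vmnet simply needs to stay running.

	out/minikube-darwin-arm64 delete -p flannel-660000
	sudo brew services restart socket_vmnet   # assumption: daemon managed by Homebrew
	out/minikube-darwin-arm64 start -p flannel-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2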

TestNetworkPlugins/group/bridge/Start (9.97s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.963994s)

-- stdout --
	* [bridge-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-660000" primary control-plane node in "bridge-660000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-660000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:54:45.189452    5940 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:54:45.189614    5940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:54:45.189617    5940 out.go:358] Setting ErrFile to fd 2...
	I1025 18:54:45.189619    5940 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:54:45.189754    5940 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:54:45.190900    5940 out.go:352] Setting JSON to false
	I1025 18:54:45.209334    5940 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5055,"bootTime":1729902630,"procs":560,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:54:45.209403    5940 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:54:45.215824    5940 out.go:177] * [bridge-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:54:45.223815    5940 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:54:45.223882    5940 notify.go:220] Checking for updates...
	I1025 18:54:45.230757    5940 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:54:45.233762    5940 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:54:45.236728    5940 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:54:45.239802    5940 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:54:45.242814    5940 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:54:45.246155    5940 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:54:45.246225    5940 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:54:45.246269    5940 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:54:45.250792    5940 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:54:45.257728    5940 start.go:297] selected driver: qemu2
	I1025 18:54:45.257734    5940 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:54:45.257741    5940 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:54:45.260178    5940 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:54:45.262752    5940 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:54:45.265828    5940 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:54:45.265844    5940 cni.go:84] Creating CNI manager for "bridge"
	I1025 18:54:45.265848    5940 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 18:54:45.265880    5940 start.go:340] cluster config:
	{Name:bridge-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:54:45.270035    5940 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:54:45.278735    5940 out.go:177] * Starting "bridge-660000" primary control-plane node in "bridge-660000" cluster
	I1025 18:54:45.281683    5940 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:54:45.281701    5940 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:54:45.281716    5940 cache.go:56] Caching tarball of preloaded images
	I1025 18:54:45.281810    5940 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:54:45.281816    5940 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:54:45.281873    5940 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/bridge-660000/config.json ...
	I1025 18:54:45.281884    5940 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/bridge-660000/config.json: {Name:mkf89be8789c13d3c329c1686b78af9907777561 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:54:45.282123    5940 start.go:360] acquireMachinesLock for bridge-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:54:45.282168    5940 start.go:364] duration metric: took 39µs to acquireMachinesLock for "bridge-660000"
	I1025 18:54:45.282179    5940 start.go:93] Provisioning new machine with config: &{Name:bridge-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:54:45.282208    5940 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:54:45.289760    5940 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:54:45.305142    5940 start.go:159] libmachine.API.Create for "bridge-660000" (driver="qemu2")
	I1025 18:54:45.305177    5940 client.go:168] LocalClient.Create starting
	I1025 18:54:45.305244    5940 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:54:45.305281    5940 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:45.305290    5940 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:45.305331    5940 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:54:45.305363    5940 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:45.305370    5940 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:45.305783    5940 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:54:45.460443    5940 main.go:141] libmachine: Creating SSH key...
	I1025 18:54:45.546350    5940 main.go:141] libmachine: Creating Disk image...
	I1025 18:54:45.546357    5940 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:54:45.546555    5940 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/disk.qcow2
	I1025 18:54:45.556789    5940 main.go:141] libmachine: STDOUT: 
	I1025 18:54:45.556808    5940 main.go:141] libmachine: STDERR: 
	I1025 18:54:45.556869    5940 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/disk.qcow2 +20000M
	I1025 18:54:45.565349    5940 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:54:45.565369    5940 main.go:141] libmachine: STDERR: 
	I1025 18:54:45.565383    5940 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/disk.qcow2
	I1025 18:54:45.565389    5940 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:54:45.565397    5940 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:54:45.565429    5940 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:0c:c4:8b:71:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/disk.qcow2
	I1025 18:54:45.567275    5940 main.go:141] libmachine: STDOUT: 
	I1025 18:54:45.567290    5940 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:54:45.567308    5940 client.go:171] duration metric: took 262.118625ms to LocalClient.Create
	I1025 18:54:47.569566    5940 start.go:128] duration metric: took 2.287271792s to createHost
	I1025 18:54:47.569685    5940 start.go:83] releasing machines lock for "bridge-660000", held for 2.287453834s
	W1025 18:54:47.569751    5940 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:54:47.585105    5940 out.go:177] * Deleting "bridge-660000" in qemu2 ...
	W1025 18:54:47.610418    5940 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:54:47.610451    5940 start.go:729] Will try again in 5 seconds ...
	I1025 18:54:52.612853    5940 start.go:360] acquireMachinesLock for bridge-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:54:52.613521    5940 start.go:364] duration metric: took 545.5µs to acquireMachinesLock for "bridge-660000"
	I1025 18:54:52.613605    5940 start.go:93] Provisioning new machine with config: &{Name:bridge-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:54:52.613957    5940 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:54:52.620812    5940 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:54:52.671448    5940 start.go:159] libmachine.API.Create for "bridge-660000" (driver="qemu2")
	I1025 18:54:52.671499    5940 client.go:168] LocalClient.Create starting
	I1025 18:54:52.671629    5940 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:54:52.671715    5940 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:52.671732    5940 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:52.671806    5940 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:54:52.671868    5940 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:52.671882    5940 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:52.672475    5940 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:54:52.838563    5940 main.go:141] libmachine: Creating SSH key...
	I1025 18:54:53.059661    5940 main.go:141] libmachine: Creating Disk image...
	I1025 18:54:53.059676    5940 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:54:53.059924    5940 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/disk.qcow2
	I1025 18:54:53.070599    5940 main.go:141] libmachine: STDOUT: 
	I1025 18:54:53.070623    5940 main.go:141] libmachine: STDERR: 
	I1025 18:54:53.070695    5940 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/disk.qcow2 +20000M
	I1025 18:54:53.079238    5940 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:54:53.079263    5940 main.go:141] libmachine: STDERR: 
	I1025 18:54:53.079274    5940 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/disk.qcow2
	I1025 18:54:53.079280    5940 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:54:53.079287    5940 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:54:53.079319    5940 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f4:56:2b:63:1f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/bridge-660000/disk.qcow2
	I1025 18:54:53.081179    5940 main.go:141] libmachine: STDOUT: 
	I1025 18:54:53.081198    5940 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:54:53.081211    5940 client.go:171] duration metric: took 409.69625ms to LocalClient.Create
	I1025 18:54:55.083459    5940 start.go:128] duration metric: took 2.469391042s to createHost
	I1025 18:54:55.083535    5940 start.go:83] releasing machines lock for "bridge-660000", held for 2.469932916s
	W1025 18:54:55.083887    5940 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:54:55.095433    5940 out.go:201] 
	W1025 18:54:55.099407    5940 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:54:55.099431    5940 out.go:270] * 
	* 
	W1025 18:54:55.101043    5940 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:54:55.111418    5940 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.97s)
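Every failure in this group has the same root cause, visible in the stderr capture above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the daemon's UNIX socket at /var/run/socket_vmnet ("Connection refused"). A minimal triage sketch for the CI host follows; it assumes the Homebrew-style socket_vmnet install whose paths appear in the log, and the --vmnet-gateway value is only an example:

	# Is the daemon's UNIX socket present on the host?
	ls -l /var/run/socket_vmnet
	# Is the socket_vmnet daemon running at all?
	pgrep -fl socket_vmnet
	# If installed as a Homebrew service, restart it (vmnet requires root):
	sudo brew services restart socket_vmnet
	# Or run the daemon in the foreground to watch for startup errors:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon healthy again, the "minikube delete -p bridge-660000" suggested above followed by a fresh start should get past host creation.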

TestNetworkPlugins/group/kubenet/Start (9.76s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-660000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.76222975s)

-- stdout --
	* [kubenet-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-660000" primary control-plane node in "kubenet-660000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-660000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:54:57.451281    6049 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:54:57.451458    6049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:54:57.451466    6049 out.go:358] Setting ErrFile to fd 2...
	I1025 18:54:57.451469    6049 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:54:57.451602    6049 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:54:57.453079    6049 out.go:352] Setting JSON to false
	I1025 18:54:57.471207    6049 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5067,"bootTime":1729902630,"procs":558,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:54:57.471284    6049 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:54:57.477415    6049 out.go:177] * [kubenet-660000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:54:57.485378    6049 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:54:57.485430    6049 notify.go:220] Checking for updates...
	I1025 18:54:57.492389    6049 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:54:57.495368    6049 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:54:57.498366    6049 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:54:57.501371    6049 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:54:57.504362    6049 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:54:57.507755    6049 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:54:57.507827    6049 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:54:57.507880    6049 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:54:57.512386    6049 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:54:57.519341    6049 start.go:297] selected driver: qemu2
	I1025 18:54:57.519349    6049 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:54:57.519355    6049 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:54:57.521780    6049 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:54:57.525346    6049 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:54:57.528404    6049 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:54:57.528423    6049 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1025 18:54:57.528460    6049 start.go:340] cluster config:
	{Name:kubenet-660000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:54:57.533209    6049 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:54:57.541268    6049 out.go:177] * Starting "kubenet-660000" primary control-plane node in "kubenet-660000" cluster
	I1025 18:54:57.545347    6049 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:54:57.545369    6049 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:54:57.545381    6049 cache.go:56] Caching tarball of preloaded images
	I1025 18:54:57.545474    6049 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:54:57.545480    6049 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:54:57.545547    6049 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/kubenet-660000/config.json ...
	I1025 18:54:57.545559    6049 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/kubenet-660000/config.json: {Name:mk86ea3366845fddd855705e80e322494e9de558 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:54:57.545937    6049 start.go:360] acquireMachinesLock for kubenet-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:54:57.545988    6049 start.go:364] duration metric: took 45.167µs to acquireMachinesLock for "kubenet-660000"
	I1025 18:54:57.546001    6049 start.go:93] Provisioning new machine with config: &{Name:kubenet-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:54:57.546028    6049 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:54:57.553354    6049 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:54:57.570726    6049 start.go:159] libmachine.API.Create for "kubenet-660000" (driver="qemu2")
	I1025 18:54:57.570754    6049 client.go:168] LocalClient.Create starting
	I1025 18:54:57.570837    6049 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:54:57.570875    6049 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:57.570891    6049 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:57.570932    6049 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:54:57.570963    6049 main.go:141] libmachine: Decoding PEM data...
	I1025 18:54:57.570975    6049 main.go:141] libmachine: Parsing certificate...
	I1025 18:54:57.571428    6049 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:54:57.726442    6049 main.go:141] libmachine: Creating SSH key...
	I1025 18:54:57.776616    6049 main.go:141] libmachine: Creating Disk image...
	I1025 18:54:57.776624    6049 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:54:57.776821    6049 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/disk.qcow2
	I1025 18:54:57.786741    6049 main.go:141] libmachine: STDOUT: 
	I1025 18:54:57.786763    6049 main.go:141] libmachine: STDERR: 
	I1025 18:54:57.786819    6049 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/disk.qcow2 +20000M
	I1025 18:54:57.795424    6049 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:54:57.795441    6049 main.go:141] libmachine: STDERR: 
	I1025 18:54:57.795456    6049 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/disk.qcow2
	I1025 18:54:57.795463    6049 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:54:57.795476    6049 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:54:57.795503    6049 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:bc:2a:cc:9b:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/disk.qcow2
	I1025 18:54:57.797362    6049 main.go:141] libmachine: STDOUT: 
	I1025 18:54:57.797384    6049 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:54:57.797406    6049 client.go:171] duration metric: took 226.640541ms to LocalClient.Create
	I1025 18:54:59.799630    6049 start.go:128] duration metric: took 2.253527792s to createHost
	I1025 18:54:59.799695    6049 start.go:83] releasing machines lock for "kubenet-660000", held for 2.25364575s
	W1025 18:54:59.799766    6049 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:54:59.813369    6049 out.go:177] * Deleting "kubenet-660000" in qemu2 ...
	W1025 18:54:59.837194    6049 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:54:59.837229    6049 start.go:729] Will try again in 5 seconds ...
	I1025 18:55:04.839231    6049 start.go:360] acquireMachinesLock for kubenet-660000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:55:04.839824    6049 start.go:364] duration metric: took 471.25µs to acquireMachinesLock for "kubenet-660000"
	I1025 18:55:04.839987    6049 start.go:93] Provisioning new machine with config: &{Name:kubenet-660000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:55:04.840299    6049 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:55:04.845982    6049 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1025 18:55:04.884018    6049 start.go:159] libmachine.API.Create for "kubenet-660000" (driver="qemu2")
	I1025 18:55:04.884072    6049 client.go:168] LocalClient.Create starting
	I1025 18:55:04.884204    6049 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:55:04.884291    6049 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:04.884309    6049 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:04.884376    6049 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:55:04.884429    6049 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:04.884440    6049 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:04.884966    6049 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:55:05.047799    6049 main.go:141] libmachine: Creating SSH key...
	I1025 18:55:05.106427    6049 main.go:141] libmachine: Creating Disk image...
	I1025 18:55:05.106435    6049 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:55:05.106629    6049 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/disk.qcow2
	I1025 18:55:05.116901    6049 main.go:141] libmachine: STDOUT: 
	I1025 18:55:05.116924    6049 main.go:141] libmachine: STDERR: 
	I1025 18:55:05.116984    6049 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/disk.qcow2 +20000M
	I1025 18:55:05.125846    6049 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:55:05.125861    6049 main.go:141] libmachine: STDERR: 
	I1025 18:55:05.125874    6049 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/disk.qcow2
	I1025 18:55:05.125879    6049 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:55:05.125887    6049 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:55:05.125924    6049 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:f6:3c:1d:89:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/kubenet-660000/disk.qcow2
	I1025 18:55:05.127834    6049 main.go:141] libmachine: STDOUT: 
	I1025 18:55:05.127846    6049 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:55:05.127857    6049 client.go:171] duration metric: took 243.775208ms to LocalClient.Create
	I1025 18:55:07.129966    6049 start.go:128] duration metric: took 2.28960225s to createHost
	I1025 18:55:07.129980    6049 start.go:83] releasing machines lock for "kubenet-660000", held for 2.290063417s
	W1025 18:55:07.130070    6049 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:07.141750    6049 out.go:201] 
	W1025 18:55:07.147790    6049 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:55:07.147797    6049 out.go:270] * 
	* 
	W1025 18:55:07.148298    6049 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:55:07.161765    6049 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.76s)
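The kubenet run fails identically, which points at an agent-level environment problem rather than anything plugin-specific. As a hedged workaround for hosts where the socket_vmnet daemon cannot be brought up, the qemu2 driver can be pointed at its user-mode network instead, which bypasses the daemon entirely (this assumes the builtin network is still accepted by this minikube build; user-mode networking gives the VM no host-reachable IP, so tests that need one would still fail):

	out/minikube-darwin-arm64 start -p kubenet-660000 --memory=3072 --driver=qemu2 --network=builtin

For this CI fleet, repairing the socket_vmnet service itself is the real fix, since these suites exercise the socket_vmnet network deliberately.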

TestStartStop/group/old-k8s-version/serial/FirstStart (9.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-825000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-825000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.779411375s)

-- stdout --
	* [old-k8s-version-825000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-825000" primary control-plane node in "old-k8s-version-825000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-825000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:55:09.547366    6164 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:55:09.547538    6164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:09.547542    6164 out.go:358] Setting ErrFile to fd 2...
	I1025 18:55:09.547545    6164 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:09.547682    6164 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:55:09.548938    6164 out.go:352] Setting JSON to false
	I1025 18:55:09.567337    6164 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5079,"bootTime":1729902630,"procs":558,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:55:09.567402    6164 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:55:09.573855    6164 out.go:177] * [old-k8s-version-825000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:55:09.581824    6164 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:55:09.581854    6164 notify.go:220] Checking for updates...
	I1025 18:55:09.588787    6164 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:55:09.591846    6164 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:55:09.595724    6164 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:55:09.598896    6164 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:55:09.601786    6164 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:55:09.605145    6164 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:55:09.605225    6164 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:55:09.605270    6164 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:55:09.609763    6164 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:55:09.616779    6164 start.go:297] selected driver: qemu2
	I1025 18:55:09.616786    6164 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:55:09.616794    6164 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:55:09.619328    6164 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:55:09.622822    6164 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:55:09.625895    6164 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:55:09.625923    6164 cni.go:84] Creating CNI manager for ""
	I1025 18:55:09.625944    6164 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 18:55:09.625968    6164 start.go:340] cluster config:
	{Name:old-k8s-version-825000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-825000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:55:09.630631    6164 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:09.638817    6164 out.go:177] * Starting "old-k8s-version-825000" primary control-plane node in "old-k8s-version-825000" cluster
	I1025 18:55:09.642765    6164 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 18:55:09.642784    6164 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1025 18:55:09.642795    6164 cache.go:56] Caching tarball of preloaded images
	I1025 18:55:09.642870    6164 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:55:09.642876    6164 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1025 18:55:09.642939    6164 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/old-k8s-version-825000/config.json ...
	I1025 18:55:09.642950    6164 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/old-k8s-version-825000/config.json: {Name:mk213481496bf2eeb29056743bab6c771255ef03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:55:09.643194    6164 start.go:360] acquireMachinesLock for old-k8s-version-825000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:55:09.643242    6164 start.go:364] duration metric: took 40.5µs to acquireMachinesLock for "old-k8s-version-825000"
	I1025 18:55:09.643254    6164 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-825000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-825000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:55:09.643293    6164 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:55:09.650774    6164 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 18:55:09.667336    6164 start.go:159] libmachine.API.Create for "old-k8s-version-825000" (driver="qemu2")
	I1025 18:55:09.667364    6164 client.go:168] LocalClient.Create starting
	I1025 18:55:09.667463    6164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:55:09.667504    6164 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:09.667516    6164 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:09.667552    6164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:55:09.667581    6164 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:09.667593    6164 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:09.667961    6164 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:55:09.824771    6164 main.go:141] libmachine: Creating SSH key...
	I1025 18:55:09.887560    6164 main.go:141] libmachine: Creating Disk image...
	I1025 18:55:09.887575    6164 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:55:09.887781    6164 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/disk.qcow2
	I1025 18:55:09.897908    6164 main.go:141] libmachine: STDOUT: 
	I1025 18:55:09.897928    6164 main.go:141] libmachine: STDERR: 
	I1025 18:55:09.898008    6164 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/disk.qcow2 +20000M
	I1025 18:55:09.906803    6164 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:55:09.906817    6164 main.go:141] libmachine: STDERR: 
	I1025 18:55:09.906835    6164 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/disk.qcow2
	I1025 18:55:09.906839    6164 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:55:09.906853    6164 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:55:09.906882    6164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:6b:49:58:de:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/disk.qcow2
	I1025 18:55:09.908701    6164 main.go:141] libmachine: STDOUT: 
	I1025 18:55:09.908715    6164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:55:09.908735    6164 client.go:171] duration metric: took 241.352917ms to LocalClient.Create
	I1025 18:55:11.911009    6164 start.go:128] duration metric: took 2.267627916s to createHost
	I1025 18:55:11.911085    6164 start.go:83] releasing machines lock for "old-k8s-version-825000", held for 2.267779459s
	W1025 18:55:11.911155    6164 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:11.924521    6164 out.go:177] * Deleting "old-k8s-version-825000" in qemu2 ...
	W1025 18:55:11.951057    6164 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:11.951079    6164 start.go:729] Will try again in 5 seconds ...
	I1025 18:55:16.953303    6164 start.go:360] acquireMachinesLock for old-k8s-version-825000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:55:16.953608    6164 start.go:364] duration metric: took 266.834µs to acquireMachinesLock for "old-k8s-version-825000"
	I1025 18:55:16.953682    6164 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-825000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-825000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:55:16.953777    6164 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:55:16.963281    6164 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 18:55:16.991733    6164 start.go:159] libmachine.API.Create for "old-k8s-version-825000" (driver="qemu2")
	I1025 18:55:16.991774    6164 client.go:168] LocalClient.Create starting
	I1025 18:55:16.991882    6164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:55:16.991940    6164 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:16.991954    6164 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:16.992009    6164 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:55:16.992058    6164 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:16.992067    6164 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:16.992524    6164 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:55:17.152012    6164 main.go:141] libmachine: Creating SSH key...
	I1025 18:55:17.238207    6164 main.go:141] libmachine: Creating Disk image...
	I1025 18:55:17.238217    6164 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:55:17.238420    6164 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/disk.qcow2
	I1025 18:55:17.248752    6164 main.go:141] libmachine: STDOUT: 
	I1025 18:55:17.248773    6164 main.go:141] libmachine: STDERR: 
	I1025 18:55:17.248827    6164 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/disk.qcow2 +20000M
	I1025 18:55:17.257412    6164 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:55:17.257428    6164 main.go:141] libmachine: STDERR: 
	I1025 18:55:17.257439    6164 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/disk.qcow2
	I1025 18:55:17.257444    6164 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:55:17.257455    6164 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:55:17.257482    6164 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:58:db:1a:5d:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/disk.qcow2
	I1025 18:55:17.259358    6164 main.go:141] libmachine: STDOUT: 
	I1025 18:55:17.259371    6164 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:55:17.259383    6164 client.go:171] duration metric: took 267.597166ms to LocalClient.Create
	I1025 18:55:19.261527    6164 start.go:128] duration metric: took 2.307679042s to createHost
	I1025 18:55:19.261563    6164 start.go:83] releasing machines lock for "old-k8s-version-825000", held for 2.307888833s
	W1025 18:55:19.261681    6164 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-825000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:19.270914    6164 out.go:201] 
	W1025 18:55:19.276927    6164 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:55:19.276935    6164 out.go:270] * 
	W1025 18:55:19.277658    6164 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:55:19.286899    6164 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-825000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
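Every subtest in this group fails from the same root cause visible in the stderr above: the qemu2 driver hands the VM's network device to socket_vmnet, and the connection to /var/run/socket_vmnet is refused, so no VM ever boots. A minimal diagnostic sketch in Go (a hypothetical standalone check, not part of the minikube test suite) that probes the same precondition:

	// probe_socket_vmnet.go: dial the unix socket the qemu2 driver uses and
	// report whether a daemon is accepting connections there. The socket
	// path matches SocketVMnetPath in the cluster config logged above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// A "connection refused" here reproduces the failure in the log:
			// the path is configured, but nothing is listening on it.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}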
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000: exit status 7 (41.207084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-825000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.82s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-825000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-825000 create -f testdata/busybox.yaml: exit status 1 (27.7755ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-825000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-825000 create -f testdata/busybox.yaml failed: exit status 1
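Because FirstStart never provisioned the VM, minikube never wrote a kubeconfig context named old-k8s-version-825000, so this and every later kubectl --context invocation fails identically. The same precondition can be checked programmatically; a hedged sketch using k8s.io/client-go/tools/clientcmd (an assumption for illustration; the test itself shells out to kubectl):

	// contextExists reports whether a named context is present in the
	// merged kubeconfig (KUBECONFIG or ~/.kube/config). Hypothetical
	// helper, not part of helpers_test.go.
	func contextExists(name string) (bool, error) {
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			return false, err
		}
		_, ok := cfg.Contexts[name] // absent for a cluster that never started
		return ok, nil
	}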
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000: exit status 7 (33.979708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-825000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000: exit status 7 (33.321958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-825000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-825000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-825000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-825000 describe deploy/metrics-server -n kube-system: exit status 1 (27.695833ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-825000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-825000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
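The expected string shows how the two flags compose: the --registries override is prefixed onto the --images override, giving fake.domain/registry.k8s.io/echoserver:1.4. A one-function sketch of that composition (hypothetical helper, not minikube's addon code):

	// addonImage builds a custom addon image reference the way the
	// expectation above is formed: registry override + "/" + image override.
	func addonImage(registry, image string) string {
		if registry == "" {
			return image
		}
		return registry + "/" + image
	}

	// addonImage("fake.domain", "registry.k8s.io/echoserver:1.4")
	//   == "fake.domain/registry.k8s.io/echoserver:1.4"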
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000: exit status 7 (33.57075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-825000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-825000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-825000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.207085125s)

                                                
                                                
-- stdout --
	* [old-k8s-version-825000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-825000" primary control-plane node in "old-k8s-version-825000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-825000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-825000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 18:55:23.150581    6215 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:55:23.150765    6215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:23.150769    6215 out.go:358] Setting ErrFile to fd 2...
	I1025 18:55:23.150772    6215 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:23.150903    6215 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:55:23.154647    6215 out.go:352] Setting JSON to false
	I1025 18:55:23.173242    6215 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5093,"bootTime":1729902630,"procs":559,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:55:23.173311    6215 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:55:23.177464    6215 out.go:177] * [old-k8s-version-825000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:55:23.185407    6215 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:55:23.185540    6215 notify.go:220] Checking for updates...
	I1025 18:55:23.192371    6215 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:55:23.195401    6215 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:55:23.198413    6215 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:55:23.201431    6215 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:55:23.204364    6215 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:55:23.207705    6215 config.go:182] Loaded profile config "old-k8s-version-825000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1025 18:55:23.211326    6215 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1025 18:55:23.214441    6215 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:55:23.218344    6215 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 18:55:23.225394    6215 start.go:297] selected driver: qemu2
	I1025 18:55:23.225403    6215 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-825000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-825000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:55:23.225455    6215 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:55:23.228394    6215 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:55:23.228428    6215 cni.go:84] Creating CNI manager for ""
	I1025 18:55:23.228447    6215 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 18:55:23.228472    6215 start.go:340] cluster config:
	{Name:old-k8s-version-825000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-825000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:55:23.233093    6215 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:23.241419    6215 out.go:177] * Starting "old-k8s-version-825000" primary control-plane node in "old-k8s-version-825000" cluster
	I1025 18:55:23.244401    6215 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 18:55:23.244429    6215 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1025 18:55:23.244438    6215 cache.go:56] Caching tarball of preloaded images
	I1025 18:55:23.244536    6215 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:55:23.244542    6215 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1025 18:55:23.244596    6215 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/old-k8s-version-825000/config.json ...
	I1025 18:55:23.245007    6215 start.go:360] acquireMachinesLock for old-k8s-version-825000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:55:23.245046    6215 start.go:364] duration metric: took 30.042µs to acquireMachinesLock for "old-k8s-version-825000"
	I1025 18:55:23.245055    6215 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:55:23.245061    6215 fix.go:54] fixHost starting: 
	I1025 18:55:23.245183    6215 fix.go:112] recreateIfNeeded on old-k8s-version-825000: state=Stopped err=<nil>
	W1025 18:55:23.245191    6215 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:55:23.248396    6215 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-825000" ...
	I1025 18:55:23.256409    6215 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:55:23.256443    6215 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:58:db:1a:5d:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/disk.qcow2
	I1025 18:55:23.258747    6215 main.go:141] libmachine: STDOUT: 
	I1025 18:55:23.258775    6215 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:55:23.258808    6215 fix.go:56] duration metric: took 13.746084ms for fixHost
	I1025 18:55:23.258813    6215 start.go:83] releasing machines lock for "old-k8s-version-825000", held for 13.761792ms
	W1025 18:55:23.258820    6215 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:55:23.258862    6215 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:23.258866    6215 start.go:729] Will try again in 5 seconds ...
	I1025 18:55:28.261193    6215 start.go:360] acquireMachinesLock for old-k8s-version-825000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:55:28.261794    6215 start.go:364] duration metric: took 404.042µs to acquireMachinesLock for "old-k8s-version-825000"
	I1025 18:55:28.262021    6215 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:55:28.262044    6215 fix.go:54] fixHost starting: 
	I1025 18:55:28.262817    6215 fix.go:112] recreateIfNeeded on old-k8s-version-825000: state=Stopped err=<nil>
	W1025 18:55:28.262844    6215 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:55:28.272294    6215 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-825000" ...
	I1025 18:55:28.276294    6215 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:55:28.276538    6215 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:58:db:1a:5d:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/old-k8s-version-825000/disk.qcow2
	I1025 18:55:28.287007    6215 main.go:141] libmachine: STDOUT: 
	I1025 18:55:28.287062    6215 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:55:28.287145    6215 fix.go:56] duration metric: took 25.104584ms for fixHost
	I1025 18:55:28.287161    6215 start.go:83] releasing machines lock for "old-k8s-version-825000", held for 25.288542ms
	W1025 18:55:28.287403    6215 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-825000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:28.296291    6215 out.go:201] 
	W1025 18:55:28.300415    6215 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:55:28.300458    6215 out.go:270] * 
	W1025 18:55:28.303223    6215 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:55:28.311308    6215 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-825000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
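Note the retry shape in the stderr above: fixHost fails, start.go logs "Will try again in 5 seconds ...", and the second attempt hits the identical refused connection, so the retry only delays the exit by five seconds. Reduced to a sketch (hypothetical; minikube's actual start path is more involved):

	// startWithRetry mirrors the pattern logged above: one attempt, a fixed
	// delay, then a single retry. A persistent failure (the socket_vmnet
	// daemon being down) simply fails twice. Assumes "time" is imported.
	func startWithRetry(start func() error, delay time.Duration) error {
		if err := start(); err == nil {
			return nil
		}
		time.Sleep(delay) // "Will try again in 5 seconds ..."
		return start()
	}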
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000: exit status 7 (72.45925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-825000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.28s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-825000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000: exit status 7 (35.484042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-825000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-825000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-825000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-825000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.778ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-825000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-825000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000: exit status 7 (33.031125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-825000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-825000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
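The (-want +got) block is a Go cmp-style diff: every expected k8s.gcr.io image is reported missing because image list returned nothing from a VM that never booted. Assuming the assertion uses github.com/google/go-cmp (whose output convention this matches; the test source is not shown here), it reduces to:

	// checkImages fails the test with a "(-want +got)" diff when the image
	// lists differ. Sketch only; imports "testing" and
	// "github.com/google/go-cmp/cmp".
	func checkImages(t *testing.T, want, got []string) {
		if diff := cmp.Diff(want, got); diff != "" {
			t.Errorf("v1.20.0 images missing (-want +got):\n%s", diff)
		}
	}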
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000: exit status 7 (33.2405ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-825000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-825000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-825000 --alsologtostderr -v=1: exit status 83 (46.329041ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-825000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-825000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 18:55:28.609668    6238 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:55:28.610630    6238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:28.610633    6238 out.go:358] Setting ErrFile to fd 2...
	I1025 18:55:28.610636    6238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:28.610786    6238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:55:28.611005    6238 out.go:352] Setting JSON to false
	I1025 18:55:28.611013    6238 mustload.go:65] Loading cluster: old-k8s-version-825000
	I1025 18:55:28.611246    6238 config.go:182] Loaded profile config "old-k8s-version-825000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1025 18:55:28.615981    6238 out.go:177] * The control-plane node old-k8s-version-825000 host is not running: state=Stopped
	I1025 18:55:28.618960    6238 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-825000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-825000 --alsologtostderr -v=1 failed: exit status 83
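Here pause exits with status 83, versus the 80 from the start attempts: the log shows minikube loading the cluster config, seeing state=Stopped up front, and printing guidance instead of attempting provisioning; the harness records only the numeric status. A sketch of how such "(dbg) Non-zero exit ... exit status N" lines can be produced with the standard library (an assumed shape, not the actual helpers_test.go code):

	// runAndReportStatus runs a CLI and recovers its exit status.
	// Hypothetical harness helper; imports "errors", "fmt", "os/exec".
	func runAndReportStatus(bin string, args ...string) {
		out, err := exec.Command(bin, args...).CombinedOutput()
		if err != nil {
			var exitErr *exec.ExitError
			if errors.As(err, &exitErr) {
				fmt.Printf("Non-zero exit: exit status %d\n%s", exitErr.ExitCode(), out)
			}
		}
	}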
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000: exit status 7 (33.840292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-825000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000: exit status 7 (33.7665ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-825000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (9.91s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-188000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-188000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.843879541s)

                                                
                                                
-- stdout --
	* [no-preload-188000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-188000" primary control-plane node in "no-preload-188000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-188000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 18:55:28.953730    6255 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:55:28.953909    6255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:28.953916    6255 out.go:358] Setting ErrFile to fd 2...
	I1025 18:55:28.953918    6255 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:28.954099    6255 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:55:28.955364    6255 out.go:352] Setting JSON to false
	I1025 18:55:28.973720    6255 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5098,"bootTime":1729902630,"procs":561,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:55:28.973801    6255 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:55:28.978029    6255 out.go:177] * [no-preload-188000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:55:28.985139    6255 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:55:28.985214    6255 notify.go:220] Checking for updates...
	I1025 18:55:28.992044    6255 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:55:28.995100    6255 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:55:28.998071    6255 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:55:29.001096    6255 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:55:29.004131    6255 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:55:29.007475    6255 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:55:29.007542    6255 config.go:182] Loaded profile config "stopped-upgrade-473000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1025 18:55:29.007597    6255 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:55:29.012056    6255 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:55:29.019017    6255 start.go:297] selected driver: qemu2
	I1025 18:55:29.019023    6255 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:55:29.019029    6255 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:55:29.021540    6255 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:55:29.026086    6255 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:55:29.029125    6255 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:55:29.029148    6255 cni.go:84] Creating CNI manager for ""
	I1025 18:55:29.029173    6255 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:55:29.029178    6255 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 18:55:29.029213    6255 start.go:340] cluster config:
	{Name:no-preload-188000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-188000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket
_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:55:29.034044    6255 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:29.041007    6255 out.go:177] * Starting "no-preload-188000" primary control-plane node in "no-preload-188000" cluster
	I1025 18:55:29.045015    6255 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:55:29.045076    6255 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/no-preload-188000/config.json ...
	I1025 18:55:29.045092    6255 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/no-preload-188000/config.json: {Name:mk45f5b28622ceab316d7a1c10d635c363e3f7e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:55:29.045098    6255 cache.go:107] acquiring lock: {Name:mk602aca643a1423de67ff61131bfdc38a2c1535 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:29.045100    6255 cache.go:107] acquiring lock: {Name:mk3749aa17cfed9cec0374ffa4b00d003145c15a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:29.045113    6255 cache.go:107] acquiring lock: {Name:mk72a624cbcb3931b5b912dcdce1c52b13dd00e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:29.045200    6255 cache.go:115] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 18:55:29.045209    6255 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 113µs
	I1025 18:55:29.045215    6255 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 18:55:29.045221    6255 cache.go:107] acquiring lock: {Name:mkbd0fbeee336d3a2e0da319a5a45aabf30bebf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:29.045281    6255 cache.go:107] acquiring lock: {Name:mk35762609cbddade8fb31dae5aa57e82a6d5d0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:29.045319    6255 cache.go:107] acquiring lock: {Name:mke9d888d896484e1e13d05374e77109c132a85a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:29.045310    6255 cache.go:107] acquiring lock: {Name:mka9ceed83d14fc136c80a4795f3080b0c390d71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:29.045306    6255 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1025 18:55:29.045387    6255 cache.go:107] acquiring lock: {Name:mk25e06b5d7d4fe07aea2503754da571415c828d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:29.045316    6255 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1025 18:55:29.045496    6255 start.go:360] acquireMachinesLock for no-preload-188000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:55:29.045497    6255 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1025 18:55:29.045614    6255 start.go:364] duration metric: took 110.417µs to acquireMachinesLock for "no-preload-188000"
	I1025 18:55:29.045635    6255 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1025 18:55:29.045655    6255 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1025 18:55:29.045661    6255 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1025 18:55:29.045627    6255 start.go:93] Provisioning new machine with config: &{Name:no-preload-188000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.2 ClusterName:no-preload-188000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:55:29.045671    6255 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:55:29.045784    6255 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1025 18:55:29.050024    6255 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 18:55:29.054126    6255 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1025 18:55:29.054131    6255 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1025 18:55:29.054688    6255 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1025 18:55:29.054986    6255 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1025 18:55:29.055131    6255 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1025 18:55:29.055208    6255 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1025 18:55:29.055463    6255 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1025 18:55:29.065366    6255 start.go:159] libmachine.API.Create for "no-preload-188000" (driver="qemu2")
	I1025 18:55:29.065389    6255 client.go:168] LocalClient.Create starting
	I1025 18:55:29.065480    6255 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:55:29.065518    6255 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:29.065526    6255 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:29.065566    6255 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:55:29.065598    6255 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:29.065608    6255 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:29.065962    6255 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:55:29.227977    6255 main.go:141] libmachine: Creating SSH key...
	I1025 18:55:29.262163    6255 main.go:141] libmachine: Creating Disk image...
	I1025 18:55:29.262182    6255 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:55:29.262408    6255 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/disk.qcow2
	I1025 18:55:29.272729    6255 main.go:141] libmachine: STDOUT: 
	I1025 18:55:29.272755    6255 main.go:141] libmachine: STDERR: 
	I1025 18:55:29.272831    6255 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/disk.qcow2 +20000M
	I1025 18:55:29.281652    6255 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:55:29.281667    6255 main.go:141] libmachine: STDERR: 
	I1025 18:55:29.281682    6255 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/disk.qcow2
	I1025 18:55:29.281685    6255 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:55:29.281699    6255 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:55:29.281723    6255 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:0c:73:da:ab:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/disk.qcow2
	I1025 18:55:29.283784    6255 main.go:141] libmachine: STDOUT: 
	I1025 18:55:29.283800    6255 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:55:29.283826    6255 client.go:171] duration metric: took 218.426083ms to LocalClient.Create
	I1025 18:55:29.483461    6255 cache.go:162] opening:  /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2
	I1025 18:55:29.484478    6255 cache.go:162] opening:  /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1025 18:55:29.550405    6255 cache.go:162] opening:  /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1025 18:55:29.595926    6255 cache.go:162] opening:  /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1025 18:55:29.657259    6255 cache.go:162] opening:  /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2
	I1025 18:55:29.723831    6255 cache.go:162] opening:  /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2
	I1025 18:55:29.780598    6255 cache.go:162] opening:  /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1025 18:55:29.934339    6255 cache.go:157] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1025 18:55:29.934372    6255 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 889.010916ms
	I1025 18:55:29.934393    6255 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1025 18:55:31.284037    6255 start.go:128] duration metric: took 2.2383015s to createHost
	I1025 18:55:31.284068    6255 start.go:83] releasing machines lock for "no-preload-188000", held for 2.238397084s
	W1025 18:55:31.284099    6255 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:31.296739    6255 out.go:177] * Deleting "no-preload-188000" in qemu2 ...
	W1025 18:55:31.307752    6255 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:31.307760    6255 start.go:729] Will try again in 5 seconds ...
	I1025 18:55:32.886061    6255 cache.go:157] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1025 18:55:32.886127    6255 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.840810167s
	I1025 18:55:32.886154    6255 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1025 18:55:33.344530    6255 cache.go:157] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1025 18:55:33.344573    6255 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 4.299150333s
	I1025 18:55:33.344593    6255 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1025 18:55:33.889760    6255 cache.go:157] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1025 18:55:33.889821    6255 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 4.8444435s
	I1025 18:55:33.889840    6255 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1025 18:55:33.904239    6255 cache.go:157] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1025 18:55:33.904283    6255 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 4.859070125s
	I1025 18:55:33.904302    6255 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1025 18:55:34.017086    6255 cache.go:157] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1025 18:55:34.017112    6255 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 4.971883s
	I1025 18:55:34.017126    6255 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1025 18:55:36.308290    6255 start.go:360] acquireMachinesLock for no-preload-188000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:55:36.308916    6255 start.go:364] duration metric: took 529.667µs to acquireMachinesLock for "no-preload-188000"
	I1025 18:55:36.309069    6255 start.go:93] Provisioning new machine with config: &{Name:no-preload-188000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-188000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:55:36.309335    6255 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:55:36.315137    6255 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 18:55:36.366286    6255 start.go:159] libmachine.API.Create for "no-preload-188000" (driver="qemu2")
	I1025 18:55:36.366344    6255 client.go:168] LocalClient.Create starting
	I1025 18:55:36.366528    6255 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:55:36.366615    6255 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:36.366639    6255 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:36.366712    6255 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:55:36.366771    6255 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:36.366787    6255 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:36.367360    6255 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:55:36.534373    6255 main.go:141] libmachine: Creating SSH key...
	I1025 18:55:36.698113    6255 main.go:141] libmachine: Creating Disk image...
	I1025 18:55:36.698124    6255 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:55:36.698339    6255 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/disk.qcow2
	I1025 18:55:36.708859    6255 main.go:141] libmachine: STDOUT: 
	I1025 18:55:36.708944    6255 main.go:141] libmachine: STDERR: 
	I1025 18:55:36.709002    6255 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/disk.qcow2 +20000M
	I1025 18:55:36.717678    6255 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:55:36.717696    6255 main.go:141] libmachine: STDERR: 
	I1025 18:55:36.717712    6255 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/disk.qcow2
	I1025 18:55:36.717716    6255 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:55:36.717724    6255 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:55:36.717766    6255 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:94:45:0c:c2:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/disk.qcow2
	I1025 18:55:36.719923    6255 main.go:141] libmachine: STDOUT: 
	I1025 18:55:36.719939    6255 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:55:36.719953    6255 client.go:171] duration metric: took 353.594417ms to LocalClient.Create
	I1025 18:55:38.721740    6255 start.go:128] duration metric: took 2.412297917s to createHost
	I1025 18:55:38.721831    6255 start.go:83] releasing machines lock for "no-preload-188000", held for 2.412827125s
	W1025 18:55:38.722132    6255 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-188000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-188000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:38.735770    6255 out.go:201] 
	W1025 18:55:38.740816    6255 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:55:38.740848    6255 out.go:270] * 
	* 
	W1025 18:55:38.743475    6255 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:55:38.750709    6255 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-188000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000: exit status 7 (68.544292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.91s)
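
Every start attempt in the log above dies at the same step: minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A minimal triage sketch on the CI host, assuming the Homebrew-managed socket_vmnet install that the log's paths suggest (the service invocation below is an assumption taken from the minikube qemu2 driver docs, not from this run):

	# Check whether anything is serving the socket the qemu2 driver points at
	# (socket path copied from the log above).
	ls -l /var/run/socket_vmnet

	# If the socket is missing or stale, (re)start the daemon; per the minikube
	# qemu2 driver docs the Homebrew service must run as root.
	HOMEBREW=$(which brew) && sudo ${HOMEBREW} services start socket_vmnet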

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-188000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-188000 create -f testdata/busybox.yaml: exit status 1 (29.62275ms)

** stderr ** 
	error: context "no-preload-188000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-188000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000: exit status 7 (33.892833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-188000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000: exit status 7 (34.026708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
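
This DeployApp failure is a cascade of the FirstStart failure above: the VM never booted, so no kubeconfig context named "no-preload-188000" was ever written, and every kubectl call exits with "context ... does not exist". A quick sketch to confirm it is a cascade rather than a kubectl-level bug (standard kubectl/minikube commands; the profile name is copied from the log):

	# The context is absent because the cluster never started; only existing
	# contexts are listed here.
	kubectl config get-contexts

	# The profile itself still exists on disk but reports a Stopped host.
	out/minikube-darwin-arm64 profile list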

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-188000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-188000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-188000 describe deploy/metrics-server -n kube-system: exit status 1 (27.842084ms)

** stderr ** 
	error: context "no-preload-188000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-188000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000: exit status 7 (34.1165ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.13s)
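
Same root cause again: the `addons enable` step appears to succeed because it mainly updates the stored profile config, while the verification step needs a live apiserver to describe the deployment. On a healthy cluster the test's check reduces to replaying its two commands by hand (both copied verbatim from the log; the expectation is that the deployment's image carries the fake.domain registry prefix):

	# Replay of the test's verification step (illustrative; requires a running cluster).
	out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-188000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context no-preload-188000 describe deploy/metrics-server -n kube-system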

TestStartStop/group/embed-certs/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-710000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-710000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.861471916s)

-- stdout --
	* [embed-certs-710000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-710000" primary control-plane node in "embed-certs-710000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-710000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:55:39.690045    6319 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:55:39.690212    6319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:39.690215    6319 out.go:358] Setting ErrFile to fd 2...
	I1025 18:55:39.690218    6319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:39.690355    6319 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:55:39.691514    6319 out.go:352] Setting JSON to false
	I1025 18:55:39.709755    6319 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5109,"bootTime":1729902630,"procs":559,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:55:39.709840    6319 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:55:39.714893    6319 out.go:177] * [embed-certs-710000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:55:39.721792    6319 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:55:39.721845    6319 notify.go:220] Checking for updates...
	I1025 18:55:39.729781    6319 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:55:39.732835    6319 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:55:39.735833    6319 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:55:39.738841    6319 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:55:39.741797    6319 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:55:39.745239    6319 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:55:39.745325    6319 config.go:182] Loaded profile config "no-preload-188000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:55:39.745377    6319 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:55:39.749796    6319 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:55:39.756804    6319 start.go:297] selected driver: qemu2
	I1025 18:55:39.756809    6319 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:55:39.756816    6319 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:55:39.759366    6319 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:55:39.763795    6319 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:55:39.767881    6319 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:55:39.767903    6319 cni.go:84] Creating CNI manager for ""
	I1025 18:55:39.767923    6319 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:55:39.767930    6319 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 18:55:39.767974    6319 start.go:340] cluster config:
	{Name:embed-certs-710000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-710000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:55:39.772645    6319 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:39.779726    6319 out.go:177] * Starting "embed-certs-710000" primary control-plane node in "embed-certs-710000" cluster
	I1025 18:55:39.783838    6319 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:55:39.783865    6319 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:55:39.783875    6319 cache.go:56] Caching tarball of preloaded images
	I1025 18:55:39.783952    6319 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:55:39.783959    6319 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:55:39.784032    6319 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/embed-certs-710000/config.json ...
	I1025 18:55:39.784044    6319 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/embed-certs-710000/config.json: {Name:mk43838af225e33d05d67302863c014dfe64c1f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:55:39.784301    6319 start.go:360] acquireMachinesLock for embed-certs-710000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:55:39.784353    6319 start.go:364] duration metric: took 45.125µs to acquireMachinesLock for "embed-certs-710000"
	I1025 18:55:39.784366    6319 start.go:93] Provisioning new machine with config: &{Name:embed-certs-710000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-710000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:55:39.784396    6319 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:55:39.787793    6319 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 18:55:39.805568    6319 start.go:159] libmachine.API.Create for "embed-certs-710000" (driver="qemu2")
	I1025 18:55:39.805601    6319 client.go:168] LocalClient.Create starting
	I1025 18:55:39.805682    6319 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:55:39.805725    6319 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:39.805740    6319 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:39.805783    6319 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:55:39.805814    6319 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:39.805825    6319 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:39.806251    6319 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:55:39.966194    6319 main.go:141] libmachine: Creating SSH key...
	I1025 18:55:39.995978    6319 main.go:141] libmachine: Creating Disk image...
	I1025 18:55:39.995983    6319 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:55:39.996176    6319 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/disk.qcow2
	I1025 18:55:40.005892    6319 main.go:141] libmachine: STDOUT: 
	I1025 18:55:40.005909    6319 main.go:141] libmachine: STDERR: 
	I1025 18:55:40.005961    6319 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/disk.qcow2 +20000M
	I1025 18:55:40.014403    6319 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:55:40.014428    6319 main.go:141] libmachine: STDERR: 
	I1025 18:55:40.014440    6319 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/disk.qcow2
	I1025 18:55:40.014445    6319 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:55:40.014457    6319 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:55:40.014487    6319 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:78:6c:4a:6b:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/disk.qcow2
	I1025 18:55:40.016240    6319 main.go:141] libmachine: STDOUT: 
	I1025 18:55:40.016256    6319 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:55:40.016275    6319 client.go:171] duration metric: took 210.662958ms to LocalClient.Create
	I1025 18:55:42.017706    6319 start.go:128] duration metric: took 2.233249083s to createHost
	I1025 18:55:42.017722    6319 start.go:83] releasing machines lock for "embed-certs-710000", held for 2.233311167s
	W1025 18:55:42.017738    6319 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:42.030375    6319 out.go:177] * Deleting "embed-certs-710000" in qemu2 ...
	W1025 18:55:42.042200    6319 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:42.042207    6319 start.go:729] Will try again in 5 seconds ...
	I1025 18:55:47.044560    6319 start.go:360] acquireMachinesLock for embed-certs-710000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:55:47.063224    6319 start.go:364] duration metric: took 18.571292ms to acquireMachinesLock for "embed-certs-710000"
	I1025 18:55:47.063301    6319 start.go:93] Provisioning new machine with config: &{Name:embed-certs-710000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-710000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:55:47.063500    6319 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:55:47.076417    6319 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 18:55:47.120323    6319 start.go:159] libmachine.API.Create for "embed-certs-710000" (driver="qemu2")
	I1025 18:55:47.120383    6319 client.go:168] LocalClient.Create starting
	I1025 18:55:47.120559    6319 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:55:47.120635    6319 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:47.120651    6319 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:47.120716    6319 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:55:47.120774    6319 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:47.120786    6319 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:47.121311    6319 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:55:47.288812    6319 main.go:141] libmachine: Creating SSH key...
	I1025 18:55:47.456378    6319 main.go:141] libmachine: Creating Disk image...
	I1025 18:55:47.456388    6319 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:55:47.456607    6319 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/disk.qcow2
	I1025 18:55:47.467399    6319 main.go:141] libmachine: STDOUT: 
	I1025 18:55:47.467418    6319 main.go:141] libmachine: STDERR: 
	I1025 18:55:47.467487    6319 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/disk.qcow2 +20000M
	I1025 18:55:47.476682    6319 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:55:47.476701    6319 main.go:141] libmachine: STDERR: 
	I1025 18:55:47.476717    6319 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/disk.qcow2
	I1025 18:55:47.476720    6319 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:55:47.476729    6319 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:55:47.476767    6319 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:2a:de:54:85:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/disk.qcow2
	I1025 18:55:47.478605    6319 main.go:141] libmachine: STDOUT: 
	I1025 18:55:47.478621    6319 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:55:47.478635    6319 client.go:171] duration metric: took 358.236459ms to LocalClient.Create
	I1025 18:55:49.480915    6319 start.go:128] duration metric: took 2.417328292s to createHost
	I1025 18:55:49.481018    6319 start.go:83] releasing machines lock for "embed-certs-710000", held for 2.417691625s
	W1025 18:55:49.481405    6319 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-710000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:49.494024    6319 out.go:201] 
	W1025 18:55:49.498082    6319 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:55:49.498149    6319 out.go:270] * 
	* 
	W1025 18:55:49.500233    6319 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:55:49.510018    6319 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-710000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000: exit status 7 (56.252ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-710000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.92s)
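
The embed-certs-710000 profile fails identically, which points at host networking state rather than anything profile-specific. The refusal can be reproduced without minikube by invoking the client binary directly (paths copied from the log; the trailing `true` is a placeholder command, since socket_vmnet_client execs whatever follows the socket path once connected):

	# With no daemon listening on the socket, this prints the same
	# 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
	# seen throughout this report.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true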

TestStartStop/group/no-preload/serial/SecondStart (5.33s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-188000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-188000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.278122958s)

-- stdout --
	* [no-preload-188000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-188000" primary control-plane node in "no-preload-188000" cluster
	* Restarting existing qemu2 VM for "no-preload-188000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-188000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:55:41.858955    6345 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:55:41.859127    6345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:41.859130    6345 out.go:358] Setting ErrFile to fd 2...
	I1025 18:55:41.859132    6345 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:41.859292    6345 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:55:41.860357    6345 out.go:352] Setting JSON to false
	I1025 18:55:41.877791    6345 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5111,"bootTime":1729902630,"procs":559,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:55:41.877868    6345 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:55:41.882852    6345 out.go:177] * [no-preload-188000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:55:41.889809    6345 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:55:41.889865    6345 notify.go:220] Checking for updates...
	I1025 18:55:41.896746    6345 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:55:41.899833    6345 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:55:41.902834    6345 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:55:41.905765    6345 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:55:41.908741    6345 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:55:41.912085    6345 config.go:182] Loaded profile config "no-preload-188000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:55:41.912368    6345 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:55:41.916715    6345 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 18:55:41.923813    6345 start.go:297] selected driver: qemu2
	I1025 18:55:41.923820    6345 start.go:901] validating driver "qemu2" against &{Name:no-preload-188000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-188000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:55:41.923880    6345 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:55:41.926476    6345 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:55:41.926504    6345 cni.go:84] Creating CNI manager for ""
	I1025 18:55:41.926525    6345 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:55:41.926548    6345 start.go:340] cluster config:
	{Name:no-preload-188000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-188000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:55:41.931073    6345 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:41.938773    6345 out.go:177] * Starting "no-preload-188000" primary control-plane node in "no-preload-188000" cluster
	I1025 18:55:41.941720    6345 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:55:41.941814    6345 cache.go:107] acquiring lock: {Name:mk3749aa17cfed9cec0374ffa4b00d003145c15a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:41.941818    6345 cache.go:107] acquiring lock: {Name:mk602aca643a1423de67ff61131bfdc38a2c1535 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:41.941849    6345 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/no-preload-188000/config.json ...
	I1025 18:55:41.941853    6345 cache.go:107] acquiring lock: {Name:mk35762609cbddade8fb31dae5aa57e82a6d5d0d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:41.941915    6345 cache.go:115] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 18:55:41.941921    6345 cache.go:115] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1025 18:55:41.941921    6345 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 110.791µs
	I1025 18:55:41.941927    6345 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 18:55:41.941926    6345 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 123.125µs
	I1025 18:55:41.941932    6345 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1025 18:55:41.941934    6345 cache.go:107] acquiring lock: {Name:mka9ceed83d14fc136c80a4795f3080b0c390d71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:41.941939    6345 cache.go:107] acquiring lock: {Name:mke9d888d896484e1e13d05374e77109c132a85a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:41.941952    6345 cache.go:107] acquiring lock: {Name:mk72a624cbcb3931b5b912dcdce1c52b13dd00e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:41.941955    6345 cache.go:107] acquiring lock: {Name:mk25e06b5d7d4fe07aea2503754da571415c828d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:41.941941    6345 cache.go:115] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1025 18:55:41.942019    6345 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1025 18:55:41.942026    6345 cache.go:107] acquiring lock: {Name:mkbd0fbeee336d3a2e0da319a5a45aabf30bebf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:41.942039    6345 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 192.75µs
	I1025 18:55:41.942057    6345 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1025 18:55:41.942051    6345 cache.go:115] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1025 18:55:41.942068    6345 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 117.958µs
	I1025 18:55:41.942077    6345 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1025 18:55:41.942061    6345 cache.go:115] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1025 18:55:41.942085    6345 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 151.292µs
	I1025 18:55:41.942088    6345 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1025 18:55:41.942079    6345 cache.go:115] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1025 18:55:41.942117    6345 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 177.209µs
	I1025 18:55:41.942127    6345 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1025 18:55:41.942142    6345 cache.go:115] /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1025 18:55:41.942151    6345 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 166.583µs
	I1025 18:55:41.942157    6345 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1025 18:55:41.942274    6345 start.go:360] acquireMachinesLock for no-preload-188000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:55:41.945592    6345 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1025 18:55:42.017808    6345 start.go:364] duration metric: took 75.515833ms to acquireMachinesLock for "no-preload-188000"
	I1025 18:55:42.017864    6345 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:55:42.017870    6345 fix.go:54] fixHost starting: 
	I1025 18:55:42.018023    6345 fix.go:112] recreateIfNeeded on no-preload-188000: state=Stopped err=<nil>
	W1025 18:55:42.018035    6345 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:55:42.026362    6345 out.go:177] * Restarting existing qemu2 VM for "no-preload-188000" ...
	I1025 18:55:42.033393    6345 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:55:42.033450    6345 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:94:45:0c:c2:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/disk.qcow2
	I1025 18:55:42.035655    6345 main.go:141] libmachine: STDOUT: 
	I1025 18:55:42.035676    6345 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:55:42.035706    6345 fix.go:56] duration metric: took 17.834542ms for fixHost
	I1025 18:55:42.035710    6345 start.go:83] releasing machines lock for "no-preload-188000", held for 17.869583ms
	W1025 18:55:42.035716    6345 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:55:42.035754    6345 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:42.035759    6345 start.go:729] Will try again in 5 seconds ...
	I1025 18:55:42.346122    6345 cache.go:162] opening:  /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1025 18:55:47.036429    6345 start.go:360] acquireMachinesLock for no-preload-188000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:55:47.036963    6345 start.go:364] duration metric: took 444µs to acquireMachinesLock for "no-preload-188000"
	I1025 18:55:47.037096    6345 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:55:47.037118    6345 fix.go:54] fixHost starting: 
	I1025 18:55:47.037907    6345 fix.go:112] recreateIfNeeded on no-preload-188000: state=Stopped err=<nil>
	W1025 18:55:47.037933    6345 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:55:47.048218    6345 out.go:177] * Restarting existing qemu2 VM for "no-preload-188000" ...
	I1025 18:55:47.051383    6345 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:55:47.051577    6345 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:94:45:0c:c2:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/no-preload-188000/disk.qcow2
	I1025 18:55:47.062972    6345 main.go:141] libmachine: STDOUT: 
	I1025 18:55:47.063029    6345 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:55:47.063112    6345 fix.go:56] duration metric: took 25.99425ms for fixHost
	I1025 18:55:47.063129    6345 start.go:83] releasing machines lock for "no-preload-188000", held for 26.142333ms
	W1025 18:55:47.063322    6345 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-188000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-188000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:47.079415    6345 out.go:201] 
	W1025 18:55:47.083577    6345 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:55:47.083612    6345 out.go:270] * 
	* 
	W1025 18:55:47.084998    6345 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:55:47.095418    6345 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-188000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000: exit status 7 (53.825334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.33s)
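
Note: every qemu2 start in this run dies at the same step: the driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). Below is a minimal standalone Go probe, written for this report and not part of minikube or its test suite, that reproduces just that connection check:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path copied from the driver logs above; adjust if
		// socket_vmnet was installed elsewhere.
		const path = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			// A stopped or missing socket_vmnet daemon surfaces here
			// the same way it does in the logs: connection refused.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails on the CI host, restarting the socket_vmnet service before re-running the suite is the likely fix; none of the qemu2-based tests below can pass while the socket is refusing connections.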

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-188000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000: exit status 7 (39.479125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-188000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-188000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-188000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (31.531083ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-188000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-188000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000: exit status 7 (38.740625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-188000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000: exit status 7 (34.704167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.09s)
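
The "-want +got" block above is a structured diff of the expected image list against the (empty) output of "minikube image list --format=json" on the stopped VM. A hedged sketch of how such a diff is typically produced with github.com/google/go-cmp, whose output style the block suggests but this excerpt does not confirm:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		// Expected images for v1.31.2, copied from the diff above.
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/coredns/coredns:v1.11.3",
			"registry.k8s.io/etcd:3.5.15-0",
			"registry.k8s.io/kube-apiserver:v1.31.2",
			"registry.k8s.io/kube-controller-manager:v1.31.2",
			"registry.k8s.io/kube-proxy:v1.31.2",
			"registry.k8s.io/kube-scheduler:v1.31.2",
			"registry.k8s.io/pause:3.10",
		}
		// The stopped VM reports no images at all, so every expected
		// entry lands on the "-" (want) side of the diff.
		got := []string{}
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.2 images missing (-want +got):\n%s", diff)
		}
	}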

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-188000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-188000 --alsologtostderr -v=1: exit status 83 (47.867625ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-188000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-188000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 18:55:47.391135    6373 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:55:47.391329    6373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:47.391333    6373 out.go:358] Setting ErrFile to fd 2...
	I1025 18:55:47.391335    6373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:47.391489    6373 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:55:47.391720    6373 out.go:352] Setting JSON to false
	I1025 18:55:47.391727    6373 mustload.go:65] Loading cluster: no-preload-188000
	I1025 18:55:47.391950    6373 config.go:182] Loaded profile config "no-preload-188000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:55:47.396429    6373 out.go:177] * The control-plane node no-preload-188000 host is not running: state=Stopped
	I1025 18:55:47.400252    6373 out.go:177]   To start a cluster, run: "minikube start -p no-preload-188000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-188000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000: exit status 7 (34.673333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-188000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000: exit status 7 (34.367916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-188000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-332000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-332000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (11.752319667s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-332000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-332000" primary control-plane node in "default-k8s-diff-port-332000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-332000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 18:55:47.850330    6400 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:55:47.850494    6400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:47.850498    6400 out.go:358] Setting ErrFile to fd 2...
	I1025 18:55:47.850501    6400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:47.850624    6400 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:55:47.851815    6400 out.go:352] Setting JSON to false
	I1025 18:55:47.869601    6400 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5117,"bootTime":1729902630,"procs":561,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:55:47.869684    6400 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:55:47.874468    6400 out.go:177] * [default-k8s-diff-port-332000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:55:47.881290    6400 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:55:47.881328    6400 notify.go:220] Checking for updates...
	I1025 18:55:47.889405    6400 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:55:47.892440    6400 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:55:47.895399    6400 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:55:47.898437    6400 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:55:47.901353    6400 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:55:47.904826    6400 config.go:182] Loaded profile config "embed-certs-710000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:55:47.904887    6400 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:55:47.904940    6400 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:55:47.908385    6400 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:55:47.915448    6400 start.go:297] selected driver: qemu2
	I1025 18:55:47.915457    6400 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:55:47.915464    6400 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:55:47.918040    6400 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 18:55:47.921392    6400 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:55:47.924475    6400 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:55:47.924505    6400 cni.go:84] Creating CNI manager for ""
	I1025 18:55:47.924529    6400 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:55:47.924535    6400 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 18:55:47.924562    6400 start.go:340] cluster config:
	{Name:default-k8s-diff-port-332000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:55:47.929194    6400 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:47.937440    6400 out.go:177] * Starting "default-k8s-diff-port-332000" primary control-plane node in "default-k8s-diff-port-332000" cluster
	I1025 18:55:47.941394    6400 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:55:47.941417    6400 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:55:47.941433    6400 cache.go:56] Caching tarball of preloaded images
	I1025 18:55:47.941517    6400 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:55:47.941523    6400 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:55:47.941594    6400 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/default-k8s-diff-port-332000/config.json ...
	I1025 18:55:47.941606    6400 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/default-k8s-diff-port-332000/config.json: {Name:mkdd538fca2730f6d88643e6d674ed72a6e88a73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:55:47.941998    6400 start.go:360] acquireMachinesLock for default-k8s-diff-port-332000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:55:49.481131    6400 start.go:364] duration metric: took 1.539053541s to acquireMachinesLock for "default-k8s-diff-port-332000"
	I1025 18:55:49.481355    6400 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-332000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:55:49.481588    6400 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:55:49.486190    6400 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 18:55:49.535106    6400 start.go:159] libmachine.API.Create for "default-k8s-diff-port-332000" (driver="qemu2")
	I1025 18:55:49.535170    6400 client.go:168] LocalClient.Create starting
	I1025 18:55:49.535342    6400 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:55:49.535421    6400 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:49.535439    6400 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:49.535506    6400 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:55:49.535565    6400 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:49.535584    6400 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:49.536256    6400 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:55:49.704505    6400 main.go:141] libmachine: Creating SSH key...
	I1025 18:55:49.843134    6400 main.go:141] libmachine: Creating Disk image...
	I1025 18:55:49.843144    6400 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:55:49.843654    6400 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/disk.qcow2
	I1025 18:55:49.854022    6400 main.go:141] libmachine: STDOUT: 
	I1025 18:55:49.854047    6400 main.go:141] libmachine: STDERR: 
	I1025 18:55:49.854102    6400 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/disk.qcow2 +20000M
	I1025 18:55:49.867336    6400 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:55:49.867359    6400 main.go:141] libmachine: STDERR: 
	I1025 18:55:49.867377    6400 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/disk.qcow2
	I1025 18:55:49.867385    6400 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:55:49.867395    6400 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:55:49.867428    6400 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:cc:f6:8a:f2:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/disk.qcow2
	I1025 18:55:49.869358    6400 main.go:141] libmachine: STDOUT: 
	I1025 18:55:49.869373    6400 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:55:49.869392    6400 client.go:171] duration metric: took 334.206875ms to LocalClient.Create
	I1025 18:55:51.871609    6400 start.go:128] duration metric: took 2.389939709s to createHost
	I1025 18:55:51.871645    6400 start.go:83] releasing machines lock for "default-k8s-diff-port-332000", held for 2.390428875s
	W1025 18:55:51.871659    6400 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:51.880487    6400 out.go:177] * Deleting "default-k8s-diff-port-332000" in qemu2 ...
	W1025 18:55:51.890729    6400 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:51.890738    6400 start.go:729] Will try again in 5 seconds ...
	I1025 18:55:56.893174    6400 start.go:360] acquireMachinesLock for default-k8s-diff-port-332000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:55:56.893645    6400 start.go:364] duration metric: took 350.459µs to acquireMachinesLock for "default-k8s-diff-port-332000"
	I1025 18:55:56.893801    6400 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-332000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:55:56.894066    6400 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:55:56.903832    6400 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 18:55:56.956430    6400 start.go:159] libmachine.API.Create for "default-k8s-diff-port-332000" (driver="qemu2")
	I1025 18:55:56.956512    6400 client.go:168] LocalClient.Create starting
	I1025 18:55:56.956687    6400 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:55:56.956787    6400 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:56.956816    6400 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:56.956919    6400 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:55:56.956985    6400 main.go:141] libmachine: Decoding PEM data...
	I1025 18:55:56.957002    6400 main.go:141] libmachine: Parsing certificate...
	I1025 18:55:56.957645    6400 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:55:57.131501    6400 main.go:141] libmachine: Creating SSH key...
	I1025 18:55:57.486091    6400 main.go:141] libmachine: Creating Disk image...
	I1025 18:55:57.486105    6400 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:55:57.486296    6400 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/disk.qcow2
	I1025 18:55:57.496711    6400 main.go:141] libmachine: STDOUT: 
	I1025 18:55:57.496738    6400 main.go:141] libmachine: STDERR: 
	I1025 18:55:57.496809    6400 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/disk.qcow2 +20000M
	I1025 18:55:57.505343    6400 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:55:57.505358    6400 main.go:141] libmachine: STDERR: 
	I1025 18:55:57.505375    6400 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/disk.qcow2
	I1025 18:55:57.505382    6400 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:55:57.505390    6400 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:55:57.505423    6400 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:8f:0a:37:e2:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/disk.qcow2
	I1025 18:55:57.507222    6400 main.go:141] libmachine: STDOUT: 
	I1025 18:55:57.507234    6400 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:55:57.507248    6400 client.go:171] duration metric: took 550.717208ms to LocalClient.Create
	I1025 18:55:59.509673    6400 start.go:128] duration metric: took 2.615409958s to createHost
	I1025 18:55:59.509786    6400 start.go:83] releasing machines lock for "default-k8s-diff-port-332000", held for 2.616055334s
	W1025 18:55:59.510172    6400 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-332000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-332000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:59.518772    6400 out.go:201] 
	W1025 18:55:59.531826    6400 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:55:59.531865    6400 out.go:270] * 
	* 
	W1025 18:55:59.534442    6400 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:55:59.546741    6400 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-332000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000: exit status 7 (68.964542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.82s)
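
For reference, the create path above always makes exactly two attempts: the first failure is logged as "StartHost failed, but will try again", the process sleeps five seconds ("Will try again in 5 seconds ..."), and a second identical failure escalates to GUEST_PROVISION and exit status 80. A simplified sketch of that retry shape, illustrative only and not minikube's actual start code:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start seen in the log; in this
	// run it always fails with the socket_vmnet connection error.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		var err error
		for attempt := 1; attempt <= 2; attempt++ {
			if err = startHost(); err == nil {
				fmt.Println("host started")
				return
			}
			if attempt == 1 {
				// First failure is non-fatal; wait and retry once.
				fmt.Printf("! StartHost failed, but will try again: %v\n", err)
				time.Sleep(5 * time.Second)
			}
		}
		// Second failure is terminal, matching the log above.
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}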

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-710000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-710000 create -f testdata/busybox.yaml: exit status 1 (31.873208ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-710000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-710000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000: exit status 7 (39.924958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-710000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000: exit status 7 (39.907042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-710000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.11s)
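
This DeployApp failure is secondary: FirstStart never created the cluster, so minikube never wrote a kubeconfig context named embed-certs-710000, and every kubectl --context call fails before reaching a server. A quick sanity check from the same shell would be:

    # Contexts are only written to $KUBECONFIG by a successful start.
    kubectl config get-contexts        # embed-certs-710000 is absent
    kubectl config current-context     # errors if no context was ever set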

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-710000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-710000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-710000 describe deploy/metrics-server -n kube-system: exit status 1 (28.706875ms)

** stderr ** 
	error: context "embed-certs-710000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-710000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000: exit status 7 (34.268584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-710000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)
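
The expectation string shows how the test composes the reference it looks for: the registry passed via --registries is prefixed onto the image passed via --images, giving fake.domain/registry.k8s.io/echoserver:1.4. On a cluster that had actually started, the same verification (reusing this run's own commands) would be roughly:

    out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-710000 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    kubectl --context embed-certs-710000 -n kube-system describe deploy/metrics-server | grep Image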

TestStartStop/group/embed-certs/serial/SecondStart (7.68s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-710000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-710000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (7.61903225s)

-- stdout --
	* [embed-certs-710000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-710000" primary control-plane node in "embed-certs-710000" cluster
	* Restarting existing qemu2 VM for "embed-certs-710000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-710000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:55:51.998280    6436 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:55:51.998451    6436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:51.998454    6436 out.go:358] Setting ErrFile to fd 2...
	I1025 18:55:51.998456    6436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:51.998591    6436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:55:51.999679    6436 out.go:352] Setting JSON to false
	I1025 18:55:52.017371    6436 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5122,"bootTime":1729902630,"procs":559,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:55:52.017439    6436 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:55:52.022513    6436 out.go:177] * [embed-certs-710000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:55:52.030441    6436 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:55:52.030500    6436 notify.go:220] Checking for updates...
	I1025 18:55:52.037480    6436 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:55:52.040494    6436 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:55:52.043487    6436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:55:52.046524    6436 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:55:52.047886    6436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:55:52.050878    6436 config.go:182] Loaded profile config "embed-certs-710000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:55:52.051174    6436 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:55:52.055464    6436 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 18:55:52.066465    6436 start.go:297] selected driver: qemu2
	I1025 18:55:52.066472    6436 start.go:901] validating driver "qemu2" against &{Name:embed-certs-710000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-710000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:55:52.066524    6436 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:55:52.069111    6436 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:55:52.069140    6436 cni.go:84] Creating CNI manager for ""
	I1025 18:55:52.069162    6436 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:55:52.069187    6436 start.go:340] cluster config:
	{Name:embed-certs-710000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-710000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:55:52.073739    6436 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:55:52.081487    6436 out.go:177] * Starting "embed-certs-710000" primary control-plane node in "embed-certs-710000" cluster
	I1025 18:55:52.085428    6436 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:55:52.085442    6436 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:55:52.085450    6436 cache.go:56] Caching tarball of preloaded images
	I1025 18:55:52.085530    6436 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:55:52.085538    6436 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:55:52.085596    6436 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/embed-certs-710000/config.json ...
	I1025 18:55:52.086048    6436 start.go:360] acquireMachinesLock for embed-certs-710000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:55:52.086099    6436 start.go:364] duration metric: took 44.958µs to acquireMachinesLock for "embed-certs-710000"
	I1025 18:55:52.086108    6436 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:55:52.086113    6436 fix.go:54] fixHost starting: 
	I1025 18:55:52.086238    6436 fix.go:112] recreateIfNeeded on embed-certs-710000: state=Stopped err=<nil>
	W1025 18:55:52.086248    6436 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:55:52.094490    6436 out.go:177] * Restarting existing qemu2 VM for "embed-certs-710000" ...
	I1025 18:55:52.098497    6436 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:55:52.098540    6436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:2a:de:54:85:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/disk.qcow2
	I1025 18:55:52.100876    6436 main.go:141] libmachine: STDOUT: 
	I1025 18:55:52.100896    6436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:55:52.100927    6436 fix.go:56] duration metric: took 14.812ms for fixHost
	I1025 18:55:52.100931    6436 start.go:83] releasing machines lock for "embed-certs-710000", held for 14.826959ms
	W1025 18:55:52.100937    6436 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:55:52.100974    6436 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:52.100978    6436 start.go:729] Will try again in 5 seconds ...
	I1025 18:55:57.103258    6436 start.go:360] acquireMachinesLock for embed-certs-710000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:55:59.509972    6436 start.go:364] duration metric: took 2.406579625s to acquireMachinesLock for "embed-certs-710000"
	I1025 18:55:59.510153    6436 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:55:59.510173    6436 fix.go:54] fixHost starting: 
	I1025 18:55:59.510906    6436 fix.go:112] recreateIfNeeded on embed-certs-710000: state=Stopped err=<nil>
	W1025 18:55:59.510934    6436 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:55:59.527751    6436 out.go:177] * Restarting existing qemu2 VM for "embed-certs-710000" ...
	I1025 18:55:59.535699    6436 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:55:59.535990    6436 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:2a:de:54:85:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/embed-certs-710000/disk.qcow2
	I1025 18:55:59.546214    6436 main.go:141] libmachine: STDOUT: 
	I1025 18:55:59.546270    6436 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:55:59.546363    6436 fix.go:56] duration metric: took 36.190875ms for fixHost
	I1025 18:55:59.546383    6436 start.go:83] releasing machines lock for "embed-certs-710000", held for 36.371ms
	W1025 18:55:59.546570    6436 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-710000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-710000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:55:59.558811    6436 out.go:201] 
	W1025 18:55:59.562761    6436 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:55:59.562793    6436 out.go:270] * 
	* 
	W1025 18:55:59.564888    6436 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:55:59.574780    6436 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-710000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000: exit status 7 (59.252459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-710000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (7.68s)
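
Unlike FirstStart, SecondStart takes the fixHost path: the existing Stopped machine is reused, so the error surfaces as "driver start" rather than "creating host", but both paths die in the same socket_vmnet_client exec wrapper. That wrapper can be exercised in isolation; a minimal sketch, assuming socket_vmnet_client's usual "socket path followed by command" usage:

    # socket_vmnet_client connects to the socket, then execs the given command with
    # the connection passed as a file descriptor (fd=3 in the qemu lines above).
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    # With no daemon listening this fails immediately with "Connection refused".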

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-332000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-332000 create -f testdata/busybox.yaml: exit status 1 (31.169292ms)

** stderr ** 
	error: context "default-k8s-diff-port-332000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-332000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000: exit status 7 (35.064084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-332000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000: exit status 7 (38.520333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.11s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-710000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000: exit status 7 (38.422084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-710000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-710000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-710000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-710000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (30.294542ms)

** stderr ** 
	error: context "embed-certs-710000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-710000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000: exit status 7 (35.349875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-710000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.07s)
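
AddonExistsAfterStop probes the dashboard addon's scraper deployment; the expected image registry.k8s.io/echoserver:1.4 comes from the CustomAddonImages map (MetricsScraper) visible in the cluster config logged earlier. Against a running cluster the probe reduces to:

    kubectl --context embed-certs-710000 -n kubernetes-dashboard \
      describe deploy/dashboard-metrics-scraper | grep Image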

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-332000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-332000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-332000 describe deploy/metrics-server -n kube-system: exit status 1 (29.734292ms)

** stderr ** 
	error: context "default-k8s-diff-port-332000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-332000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000: exit status 7 (37.895459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-710000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000: exit status 7 (35.370666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-710000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.09s)
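
The (-want +got) block is a go-cmp diff: every expected v1.31.2 image sits on the -want side because image list evidently returned nothing for the stopped host, leaving got empty. The check can be rerun by hand:

    out/minikube-darwin-arm64 -p embed-certs-710000 image list --format=json
    # Empty while the host is Stopped; a healthy cluster returns a JSON array
    # containing the eight images listed above.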

TestStartStop/group/embed-certs/serial/Pause (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-710000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-710000 --alsologtostderr -v=1: exit status 83 (54.13975ms)

-- stdout --
	* The control-plane node embed-certs-710000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-710000"

-- /stdout --
** stderr ** 
	I1025 18:55:59.878061    6471 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:55:59.878271    6471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:59.878275    6471 out.go:358] Setting ErrFile to fd 2...
	I1025 18:55:59.878280    6471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:55:59.878399    6471 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:55:59.878629    6471 out.go:352] Setting JSON to false
	I1025 18:55:59.878641    6471 mustload.go:65] Loading cluster: embed-certs-710000
	I1025 18:55:59.878884    6471 config.go:182] Loaded profile config "embed-certs-710000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:55:59.883222    6471 out.go:177] * The control-plane node embed-certs-710000 host is not running: state=Stopped
	I1025 18:55:59.891091    6471 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-710000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-710000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000: exit status 7 (34.749958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-710000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000: exit status 7 (32.806167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-710000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.12s)
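
Pause fails along minikube's advisory path rather than with a driver error: the CLI sees the Stopped host, prints the "To start a cluster" advice shown above, and exits 83, which the test counts as a failure. The two-step state check mirrors what the post-mortem helper does:

    out/minikube-darwin-arm64 status -p embed-certs-710000 --format={{.Host}}   # prints Stopped, exit 7
    out/minikube-darwin-arm64 pause -p embed-certs-710000                       # exits 83 with start advice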

TestStartStop/group/newest-cni/serial/FirstStart (10.1s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-297000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-297000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (10.0266035s)

-- stdout --
	* [newest-cni-297000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-297000" primary control-plane node in "newest-cni-297000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-297000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:56:00.218796    6494 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:56:00.218971    6494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:56:00.218974    6494 out.go:358] Setting ErrFile to fd 2...
	I1025 18:56:00.218977    6494 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:56:00.219079    6494 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:56:00.220254    6494 out.go:352] Setting JSON to false
	I1025 18:56:00.237813    6494 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5130,"bootTime":1729902630,"procs":559,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:56:00.237894    6494 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:56:00.243351    6494 out.go:177] * [newest-cni-297000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:56:00.250328    6494 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:56:00.250381    6494 notify.go:220] Checking for updates...
	I1025 18:56:00.257263    6494 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:56:00.260285    6494 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:56:00.263362    6494 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:56:00.266276    6494 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:56:00.269301    6494 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:56:00.272613    6494 config.go:182] Loaded profile config "default-k8s-diff-port-332000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:56:00.272687    6494 config.go:182] Loaded profile config "multinode-293000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:56:00.272732    6494 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:56:00.277196    6494 out.go:177] * Using the qemu2 driver based on user configuration
	I1025 18:56:00.284258    6494 start.go:297] selected driver: qemu2
	I1025 18:56:00.284265    6494 start.go:901] validating driver "qemu2" against <nil>
	I1025 18:56:00.284272    6494 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:56:00.286864    6494 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1025 18:56:00.286902    6494 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1025 18:56:00.295291    6494 out.go:177] * Automatically selected the socket_vmnet network
	I1025 18:56:00.298398    6494 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 18:56:00.298422    6494 cni.go:84] Creating CNI manager for ""
	I1025 18:56:00.298452    6494 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:56:00.298457    6494 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 18:56:00.298498    6494 start.go:340] cluster config:
	{Name:newest-cni-297000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-297000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:56:00.303403    6494 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:56:00.311291    6494 out.go:177] * Starting "newest-cni-297000" primary control-plane node in "newest-cni-297000" cluster
	I1025 18:56:00.314233    6494 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:56:00.314253    6494 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:56:00.314261    6494 cache.go:56] Caching tarball of preloaded images
	I1025 18:56:00.314368    6494 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:56:00.314375    6494 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:56:00.314439    6494 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/newest-cni-297000/config.json ...
	I1025 18:56:00.314450    6494 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/newest-cni-297000/config.json: {Name:mk7f2ff08de26c159b689fee6ce35bd5d269c9b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 18:56:00.314778    6494 start.go:360] acquireMachinesLock for newest-cni-297000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:56:00.314829    6494 start.go:364] duration metric: took 45µs to acquireMachinesLock for "newest-cni-297000"
	I1025 18:56:00.314843    6494 start.go:93] Provisioning new machine with config: &{Name:newest-cni-297000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-297000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:56:00.314881    6494 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:56:00.322264    6494 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 18:56:00.340752    6494 start.go:159] libmachine.API.Create for "newest-cni-297000" (driver="qemu2")
	I1025 18:56:00.340785    6494 client.go:168] LocalClient.Create starting
	I1025 18:56:00.340855    6494 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:56:00.340898    6494 main.go:141] libmachine: Decoding PEM data...
	I1025 18:56:00.340911    6494 main.go:141] libmachine: Parsing certificate...
	I1025 18:56:00.340948    6494 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:56:00.340978    6494 main.go:141] libmachine: Decoding PEM data...
	I1025 18:56:00.340985    6494 main.go:141] libmachine: Parsing certificate...
	I1025 18:56:00.341364    6494 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:56:00.497397    6494 main.go:141] libmachine: Creating SSH key...
	I1025 18:56:00.686658    6494 main.go:141] libmachine: Creating Disk image...
	I1025 18:56:00.686667    6494 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:56:00.686858    6494 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/disk.qcow2
	I1025 18:56:00.696889    6494 main.go:141] libmachine: STDOUT: 
	I1025 18:56:00.696908    6494 main.go:141] libmachine: STDERR: 
	I1025 18:56:00.696969    6494 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/disk.qcow2 +20000M
	I1025 18:56:00.705386    6494 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:56:00.705402    6494 main.go:141] libmachine: STDERR: 
	I1025 18:56:00.705432    6494 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/disk.qcow2
	I1025 18:56:00.705438    6494 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:56:00.705453    6494 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:56:00.705488    6494 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:f1:f9:67:81:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/disk.qcow2
	I1025 18:56:00.707224    6494 main.go:141] libmachine: STDOUT: 
	I1025 18:56:00.707237    6494 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:56:00.707256    6494 client.go:171] duration metric: took 366.454959ms to LocalClient.Create
	I1025 18:56:02.709498    6494 start.go:128] duration metric: took 2.394537583s to createHost
	I1025 18:56:02.709560    6494 start.go:83] releasing machines lock for "newest-cni-297000", held for 2.394664084s
	W1025 18:56:02.709619    6494 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:56:02.727095    6494 out.go:177] * Deleting "newest-cni-297000" in qemu2 ...
	W1025 18:56:02.755655    6494 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:56:02.755685    6494 start.go:729] Will try again in 5 seconds ...
	I1025 18:56:07.757922    6494 start.go:360] acquireMachinesLock for newest-cni-297000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:56:07.769635    6494 start.go:364] duration metric: took 11.644791ms to acquireMachinesLock for "newest-cni-297000"
	I1025 18:56:07.769691    6494 start.go:93] Provisioning new machine with config: &{Name:newest-cni-297000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-297000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1025 18:56:07.769841    6494 start.go:125] createHost starting for "" (driver="qemu2")
	I1025 18:56:07.778753    6494 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1025 18:56:07.825598    6494 start.go:159] libmachine.API.Create for "newest-cni-297000" (driver="qemu2")
	I1025 18:56:07.825650    6494 client.go:168] LocalClient.Create starting
	I1025 18:56:07.825769    6494 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/ca.pem
	I1025 18:56:07.825851    6494 main.go:141] libmachine: Decoding PEM data...
	I1025 18:56:07.825868    6494 main.go:141] libmachine: Parsing certificate...
	I1025 18:56:07.825929    6494 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19868-1112/.minikube/certs/cert.pem
	I1025 18:56:07.825988    6494 main.go:141] libmachine: Decoding PEM data...
	I1025 18:56:07.826000    6494 main.go:141] libmachine: Parsing certificate...
	I1025 18:56:07.826584    6494 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso...
	I1025 18:56:07.995045    6494 main.go:141] libmachine: Creating SSH key...
	I1025 18:56:08.150726    6494 main.go:141] libmachine: Creating Disk image...
	I1025 18:56:08.150738    6494 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1025 18:56:08.150954    6494 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/disk.qcow2.raw /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/disk.qcow2
	I1025 18:56:08.162437    6494 main.go:141] libmachine: STDOUT: 
	I1025 18:56:08.162459    6494 main.go:141] libmachine: STDERR: 
	I1025 18:56:08.162549    6494 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/disk.qcow2 +20000M
	I1025 18:56:08.172126    6494 main.go:141] libmachine: STDOUT: Image resized.
	
	I1025 18:56:08.172153    6494 main.go:141] libmachine: STDERR: 
	I1025 18:56:08.172174    6494 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/disk.qcow2
	I1025 18:56:08.172178    6494 main.go:141] libmachine: Starting QEMU VM...
	I1025 18:56:08.172197    6494 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:56:08.172228    6494 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:6a:6a:1b:38:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/disk.qcow2
	I1025 18:56:08.174350    6494 main.go:141] libmachine: STDOUT: 
	I1025 18:56:08.174368    6494 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:56:08.174384    6494 client.go:171] duration metric: took 348.720917ms to LocalClient.Create
	I1025 18:56:10.176796    6494 start.go:128] duration metric: took 2.40686s to createHost
	I1025 18:56:10.176878    6494 start.go:83] releasing machines lock for "newest-cni-297000", held for 2.407163583s
	W1025 18:56:10.177298    6494 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-297000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-297000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:56:10.186986    6494 out.go:201] 
	W1025 18:56:10.190165    6494 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:56:10.190201    6494 out.go:270] * 
	* 
	W1025 18:56:10.192473    6494 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:56:10.205037    6494 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-297000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-297000 -n newest-cni-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-297000 -n newest-cni-297000: exit status 7 (73.260417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-297000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.10s)
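
All of the qemu2 failures in this run share one root cause, visible in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon's UNIX socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never receives a network file descriptor and the VM cannot boot. Below is a minimal reachability probe for the test host; it assumes only the socket path that appears in the log, and the likely remedy (restarting the socket_vmnet service, e.g. via Homebrew services as in minikube's qemu driver docs) is an assumption rather than something this report shows.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Probe the UNIX socket that every failure in this report points at.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // A missing or dead daemon yields the same "connection refused" seen above.
            fmt.Println("socket_vmnet unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }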

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-332000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-332000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.843761125s)

-- stdout --
	* [default-k8s-diff-port-332000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-332000" primary control-plane node in "default-k8s-diff-port-332000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-332000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-332000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:56:01.990364    6516 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:56:01.990523    6516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:56:01.990526    6516 out.go:358] Setting ErrFile to fd 2...
	I1025 18:56:01.990528    6516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:56:01.990657    6516 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:56:01.991747    6516 out.go:352] Setting JSON to false
	I1025 18:56:02.009423    6516 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5131,"bootTime":1729902630,"procs":557,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:56:02.009487    6516 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:56:02.014394    6516 out.go:177] * [default-k8s-diff-port-332000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:56:02.021324    6516 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:56:02.021386    6516 notify.go:220] Checking for updates...
	I1025 18:56:02.029256    6516 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:56:02.032336    6516 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:56:02.033633    6516 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:56:02.036339    6516 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:56:02.039333    6516 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:56:02.042639    6516 config.go:182] Loaded profile config "default-k8s-diff-port-332000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:56:02.042906    6516 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:56:02.051467    6516 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 18:56:02.054296    6516 start.go:297] selected driver: qemu2
	I1025 18:56:02.054311    6516 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-332000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:56:02.054402    6516 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:56:02.056837    6516 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 18:56:02.056862    6516 cni.go:84] Creating CNI manager for ""
	I1025 18:56:02.056883    6516 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:56:02.056909    6516 start.go:340] cluster config:
	{Name:default-k8s-diff-port-332000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-332000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:56:02.060858    6516 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:56:02.069331    6516 out.go:177] * Starting "default-k8s-diff-port-332000" primary control-plane node in "default-k8s-diff-port-332000" cluster
	I1025 18:56:02.072245    6516 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:56:02.072261    6516 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:56:02.072266    6516 cache.go:56] Caching tarball of preloaded images
	I1025 18:56:02.072312    6516 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:56:02.072318    6516 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:56:02.072372    6516 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/default-k8s-diff-port-332000/config.json ...
	I1025 18:56:02.072844    6516 start.go:360] acquireMachinesLock for default-k8s-diff-port-332000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:56:02.709704    6516 start.go:364] duration metric: took 636.821458ms to acquireMachinesLock for "default-k8s-diff-port-332000"
	I1025 18:56:02.709857    6516 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:56:02.709918    6516 fix.go:54] fixHost starting: 
	I1025 18:56:02.710585    6516 fix.go:112] recreateIfNeeded on default-k8s-diff-port-332000: state=Stopped err=<nil>
	W1025 18:56:02.710630    6516 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:56:02.719107    6516 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-332000" ...
	I1025 18:56:02.730208    6516 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:56:02.730414    6516 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:8f:0a:37:e2:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/disk.qcow2
	I1025 18:56:02.743625    6516 main.go:141] libmachine: STDOUT: 
	I1025 18:56:02.743686    6516 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:56:02.743805    6516 fix.go:56] duration metric: took 33.915791ms for fixHost
	I1025 18:56:02.743825    6516 start.go:83] releasing machines lock for "default-k8s-diff-port-332000", held for 34.089208ms
	W1025 18:56:02.743854    6516 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:56:02.744037    6516 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:56:02.744054    6516 start.go:729] Will try again in 5 seconds ...
	I1025 18:56:07.746516    6516 start.go:360] acquireMachinesLock for default-k8s-diff-port-332000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:56:07.747095    6516 start.go:364] duration metric: took 483.25µs to acquireMachinesLock for "default-k8s-diff-port-332000"
	I1025 18:56:07.747246    6516 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:56:07.747267    6516 fix.go:54] fixHost starting: 
	I1025 18:56:07.748090    6516 fix.go:112] recreateIfNeeded on default-k8s-diff-port-332000: state=Stopped err=<nil>
	W1025 18:56:07.748116    6516 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:56:07.755734    6516 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-332000" ...
	I1025 18:56:07.758719    6516 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:56:07.759000    6516 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:8f:0a:37:e2:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/default-k8s-diff-port-332000/disk.qcow2
	I1025 18:56:07.769404    6516 main.go:141] libmachine: STDOUT: 
	I1025 18:56:07.769472    6516 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:56:07.769547    6516 fix.go:56] duration metric: took 22.282833ms for fixHost
	I1025 18:56:07.769570    6516 start.go:83] releasing machines lock for "default-k8s-diff-port-332000", held for 22.45075ms
	W1025 18:56:07.769782    6516 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-332000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-332000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:56:07.781622    6516 out.go:201] 
	W1025 18:56:07.785828    6516 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:56:07.785861    6516 out.go:270] * 
	* 
	W1025 18:56:07.787946    6516 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:56:07.796680    6516 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-332000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000: exit status 7 (59.018583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.90s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-332000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000: exit status 7 (40.037084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-332000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-332000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-332000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (31.364541ms)

** stderr ** 
	error: context "default-k8s-diff-port-332000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-332000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000: exit status 7 (39.625667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-332000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000: exit status 7 (34.056958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
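
The "images missing (-want +got)" block above is a go-cmp style diff: every expected image carries a leading "-" because `image list` returned nothing for a VM that never started. A minimal sketch of how a diff of that shape is produced with github.com/google/go-cmp (the slice below mirrors two entries from the log; the variable names are illustrative, not the test's actual source):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/pause:3.10",
        }
        var got []string // empty: the stopped VM listed no images

        // cmp.Diff prefixes want-only entries with "-" and got-only entries with "+".
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.31.2 images missing (-want +got):\n%s", diff)
        }
    }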

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-332000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-332000 --alsologtostderr -v=1: exit status 83 (46.608792ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-332000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-332000"

-- /stdout --
** stderr ** 
	I1025 18:56:08.091050    6539 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:56:08.091269    6539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:56:08.091275    6539 out.go:358] Setting ErrFile to fd 2...
	I1025 18:56:08.091277    6539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:56:08.091432    6539 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:56:08.091645    6539 out.go:352] Setting JSON to false
	I1025 18:56:08.091654    6539 mustload.go:65] Loading cluster: default-k8s-diff-port-332000
	I1025 18:56:08.091877    6539 config.go:182] Loaded profile config "default-k8s-diff-port-332000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:56:08.095536    6539 out.go:177] * The control-plane node default-k8s-diff-port-332000 host is not running: state=Stopped
	I1025 18:56:08.099661    6539 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-332000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-332000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000: exit status 7 (33.232291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-332000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000: exit status 7 (33.670458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-332000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-297000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-297000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.19100225s)

-- stdout --
	* [newest-cni-297000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-297000" primary control-plane node in "newest-cni-297000" cluster
	* Restarting existing qemu2 VM for "newest-cni-297000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-297000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1025 18:56:12.323141    6577 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:56:12.323303    6577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:56:12.323306    6577 out.go:358] Setting ErrFile to fd 2...
	I1025 18:56:12.323310    6577 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:56:12.323449    6577 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:56:12.324475    6577 out.go:352] Setting JSON to false
	I1025 18:56:12.343024    6577 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5142,"bootTime":1729902630,"procs":557,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 18:56:12.343092    6577 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 18:56:12.347985    6577 out.go:177] * [newest-cni-297000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 18:56:12.353951    6577 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 18:56:12.353992    6577 notify.go:220] Checking for updates...
	I1025 18:56:12.364896    6577 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 18:56:12.367917    6577 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 18:56:12.370963    6577 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 18:56:12.373843    6577 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 18:56:12.376922    6577 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 18:56:12.380182    6577 config.go:182] Loaded profile config "newest-cni-297000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:56:12.380456    6577 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 18:56:12.383864    6577 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 18:56:12.390893    6577 start.go:297] selected driver: qemu2
	I1025 18:56:12.390898    6577 start.go:901] validating driver "qemu2" against &{Name:newest-cni-297000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-297000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:56:12.390956    6577 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 18:56:12.393615    6577 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 18:56:12.393642    6577 cni.go:84] Creating CNI manager for ""
	I1025 18:56:12.393664    6577 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 18:56:12.393707    6577 start.go:340] cluster config:
	{Name:newest-cni-297000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-297000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 18:56:12.398238    6577 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 18:56:12.406883    6577 out.go:177] * Starting "newest-cni-297000" primary control-plane node in "newest-cni-297000" cluster
	I1025 18:56:12.409814    6577 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 18:56:12.409830    6577 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 18:56:12.409839    6577 cache.go:56] Caching tarball of preloaded images
	I1025 18:56:12.409922    6577 preload.go:172] Found /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1025 18:56:12.409928    6577 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 18:56:12.409985    6577 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/newest-cni-297000/config.json ...
	I1025 18:56:12.410408    6577 start.go:360] acquireMachinesLock for newest-cni-297000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:56:12.410454    6577 start.go:364] duration metric: took 40.291µs to acquireMachinesLock for "newest-cni-297000"
	I1025 18:56:12.410463    6577 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:56:12.410468    6577 fix.go:54] fixHost starting: 
	I1025 18:56:12.410583    6577 fix.go:112] recreateIfNeeded on newest-cni-297000: state=Stopped err=<nil>
	W1025 18:56:12.410592    6577 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:56:12.413969    6577 out.go:177] * Restarting existing qemu2 VM for "newest-cni-297000" ...
	I1025 18:56:12.420886    6577 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:56:12.420931    6577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:6a:6a:1b:38:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/disk.qcow2
	I1025 18:56:12.423093    6577 main.go:141] libmachine: STDOUT: 
	I1025 18:56:12.423108    6577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:56:12.423136    6577 fix.go:56] duration metric: took 12.66775ms for fixHost
	I1025 18:56:12.423142    6577 start.go:83] releasing machines lock for "newest-cni-297000", held for 12.683209ms
	W1025 18:56:12.423148    6577 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:56:12.423189    6577 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:56:12.423194    6577 start.go:729] Will try again in 5 seconds ...
	I1025 18:56:17.425518    6577 start.go:360] acquireMachinesLock for newest-cni-297000: {Name:mk4b99cd772ed5573cf21ee79cbc172313699dba Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1025 18:56:17.425979    6577 start.go:364] duration metric: took 357.125µs to acquireMachinesLock for "newest-cni-297000"
	I1025 18:56:17.426144    6577 start.go:96] Skipping create...Using existing machine configuration
	I1025 18:56:17.426163    6577 fix.go:54] fixHost starting: 
	I1025 18:56:17.426920    6577 fix.go:112] recreateIfNeeded on newest-cni-297000: state=Stopped err=<nil>
	W1025 18:56:17.426947    6577 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 18:56:17.435578    6577 out.go:177] * Restarting existing qemu2 VM for "newest-cni-297000" ...
	I1025 18:56:17.438718    6577 qemu.go:418] Using hvf for hardware acceleration
	I1025 18:56:17.438927    6577 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:6a:6a:1b:38:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19868-1112/.minikube/machines/newest-cni-297000/disk.qcow2
	I1025 18:56:17.449250    6577 main.go:141] libmachine: STDOUT: 
	I1025 18:56:17.449327    6577 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1025 18:56:17.449430    6577 fix.go:56] duration metric: took 23.264166ms for fixHost
	I1025 18:56:17.449450    6577 start.go:83] releasing machines lock for "newest-cni-297000", held for 23.442375ms
	W1025 18:56:17.449672    6577 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-297000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-297000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1025 18:56:17.455666    6577 out.go:201] 
	W1025 18:56:17.459690    6577 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1025 18:56:17.459715    6577 out.go:270] * 
	* 
	W1025 18:56:17.462294    6577 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 18:56:17.469741    6577 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-297000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-297000 -n newest-cni-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-297000 -n newest-cni-297000: exit status 7 (78.180834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-297000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)
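
Each restart failure above follows the same two-attempt control flow: the driver start fails, minikube logs "Will try again in 5 seconds ...", retries once, and only then exits with GUEST_PROVISION (the exit status 80 the harness records). A minimal Go sketch of that retry-once shape as the log shows it (the function and error text below are illustrative stand-ins, not minikube's actual source):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // startHost stands in for the driver start that keeps failing in this report.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            if err = startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
                os.Exit(80) // the exit status the test harness records
            }
        }
    }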

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-297000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-297000 -n newest-cni-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-297000 -n newest-cni-297000: exit status 7 (34.697833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-297000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)
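
All of the -want images are reported missing simply because the node never booted, so nothing was ever pulled. For cross-checking, the control-plane image set expected for a Kubernetes release can be listed without a cluster (a sketch, assuming a kubeadm binary is available; gcr.io/k8s-minikube/storage-provisioner is minikube-specific and will not appear in kubeadm's output):

	kubeadm config images list --kubernetes-version v1.31.2
	# should print, modulo ordering:
	#   registry.k8s.io/kube-apiserver:v1.31.2
	#   registry.k8s.io/kube-controller-manager:v1.31.2
	#   registry.k8s.io/kube-scheduler:v1.31.2
	#   registry.k8s.io/kube-proxy:v1.31.2
	#   registry.k8s.io/coredns/coredns:v1.11.3
	#   registry.k8s.io/pause:3.10
	#   registry.k8s.io/etcd:3.5.15-0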

TestStartStop/group/newest-cni/serial/Pause (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-297000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-297000 --alsologtostderr -v=1: exit status 83 (45.787667ms)

-- stdout --
	* The control-plane node newest-cni-297000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-297000"

-- /stdout --
** stderr ** 
	I1025 18:56:17.672572    6593 out.go:345] Setting OutFile to fd 1 ...
	I1025 18:56:17.672755    6593 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:56:17.672758    6593 out.go:358] Setting ErrFile to fd 2...
	I1025 18:56:17.672761    6593 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 18:56:17.672881    6593 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 18:56:17.673110    6593 out.go:352] Setting JSON to false
	I1025 18:56:17.673117    6593 mustload.go:65] Loading cluster: newest-cni-297000
	I1025 18:56:17.674091    6593 config.go:182] Loaded profile config "newest-cni-297000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 18:56:17.678679    6593 out.go:177] * The control-plane node newest-cni-297000 host is not running: state=Stopped
	I1025 18:56:17.682630    6593 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-297000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-297000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-297000 -n newest-cni-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-297000 -n newest-cni-297000: exit status 7 (34.609042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-297000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-297000 -n newest-cni-297000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-297000 -n newest-cni-297000: exit status 7 (34.846084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-297000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.12s)

Test pass (153/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.2/json-events 10.62
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.08
18 TestDownloadOnly/v1.31.2/DeleteAll 0.11
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.31
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 198.06
29 TestAddons/serial/Volcano 39.05
31 TestAddons/serial/GCPAuth/Namespaces 0.08
32 TestAddons/serial/GCPAuth/FakeCredentials 9.39
35 TestAddons/parallel/Registry 14.53
36 TestAddons/parallel/Ingress 16.56
37 TestAddons/parallel/InspektorGadget 10.3
38 TestAddons/parallel/MetricsServer 5.31
40 TestAddons/parallel/CSI 41.7
41 TestAddons/parallel/Headlamp 17.61
42 TestAddons/parallel/CloudSpanner 6.19
43 TestAddons/parallel/LocalPath 40.97
44 TestAddons/parallel/NvidiaDevicePlugin 6.19
45 TestAddons/parallel/Yakd 10.25
47 TestAddons/StoppedEnableDisable 12.44
55 TestHyperKitDriverInstallOrUpdate 11.96
58 TestErrorSpam/setup 33.91
59 TestErrorSpam/start 0.36
60 TestErrorSpam/status 0.25
61 TestErrorSpam/pause 0.72
62 TestErrorSpam/unpause 0.64
63 TestErrorSpam/stop 55.27
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 49.39
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 39.11
70 TestFunctional/serial/KubeContext 0.03
71 TestFunctional/serial/KubectlGetPods 0.04
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.22
75 TestFunctional/serial/CacheCmd/cache/add_local 1.16
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
77 TestFunctional/serial/CacheCmd/cache/list 0.04
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
79 TestFunctional/serial/CacheCmd/cache/cache_reload 0.72
80 TestFunctional/serial/CacheCmd/cache/delete 0.08
81 TestFunctional/serial/MinikubeKubectlCmd 0.79
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.17
83 TestFunctional/serial/ExtraConfig 36.88
84 TestFunctional/serial/ComponentHealth 0.04
85 TestFunctional/serial/LogsCmd 0.63
86 TestFunctional/serial/LogsFileCmd 0.61
87 TestFunctional/serial/InvalidService 3.63
89 TestFunctional/parallel/ConfigCmd 0.24
90 TestFunctional/parallel/DashboardCmd 8.9
91 TestFunctional/parallel/DryRun 0.24
92 TestFunctional/parallel/InternationalLanguage 0.12
93 TestFunctional/parallel/StatusCmd 0.25
98 TestFunctional/parallel/AddonsCmd 0.11
99 TestFunctional/parallel/PersistentVolumeClaim 24.9
101 TestFunctional/parallel/SSHCmd 0.13
102 TestFunctional/parallel/CpCmd 0.42
104 TestFunctional/parallel/FileSync 0.07
105 TestFunctional/parallel/CertSync 0.49
109 TestFunctional/parallel/NodeLabels 0.04
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.12
113 TestFunctional/parallel/License 0.28
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.95
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.03
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.1
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
121 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
123 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
125 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
126 TestFunctional/parallel/ServiceCmd/List 0.32
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
129 TestFunctional/parallel/ServiceCmd/Format 0.1
130 TestFunctional/parallel/ServiceCmd/URL 0.1
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.15
132 TestFunctional/parallel/ProfileCmd/profile_list 0.14
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.14
134 TestFunctional/parallel/MountCmd/any-port 5.26
135 TestFunctional/parallel/MountCmd/specific-port 1.19
136 TestFunctional/parallel/MountCmd/VerifyCleanup 1.8
137 TestFunctional/parallel/Version/short 0.05
138 TestFunctional/parallel/Version/components 0.21
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
143 TestFunctional/parallel/ImageCommands/ImageBuild 1.86
144 TestFunctional/parallel/ImageCommands/Setup 1.64
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.68
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
148 TestFunctional/parallel/DockerEnv/bash 0.31
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.07
152 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.16
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.18
156 TestFunctional/delete_echo-server_images 0.03
157 TestFunctional/delete_my-image_image 0.01
158 TestFunctional/delete_minikube_cached_images 0.01
168 TestMultiControlPlane/serial/CopyFile 0.04
176 TestImageBuild/serial/Setup 34.79
177 TestImageBuild/serial/NormalBuild 1.36
178 TestImageBuild/serial/BuildWithBuildArg 0.41
179 TestImageBuild/serial/BuildWithDockerIgnore 0.31
180 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.33
185 TestJSONOutput/start/Audit 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 6.66
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.22
212 TestMainNoArgs 0.04
213 TestMinikubeProfile 70.8
259 TestStoppedBinaryUpgrade/Setup 1.01
271 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
275 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
276 TestNoKubernetes/serial/ProfileList 31.18
277 TestNoKubernetes/serial/Stop 2.07
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
289 TestStoppedBinaryUpgrade/MinikubeLogs 0.76
294 TestStartStop/group/old-k8s-version/serial/Stop 3.44
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
305 TestStartStop/group/no-preload/serial/Stop 2.64
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
318 TestStartStop/group/embed-certs/serial/Stop 2.01
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.14
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.95
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
338 TestStartStop/group/newest-cni/serial/Stop 1.81
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1025 17:42:54.381216    1672 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1025 17:42:54.381623    1672 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
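
If a stale or corrupt preload is ever suspected, the cached tarball can be verified by hand against the md5 that the downloader pins in its ?checksum= URL (the value appears later in this log); a hypothetical manual check on macOS, with path and md5 taken from this run:

	md5 -q /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	# expected: 1a3e8f9b29e6affec63d76d0d3000942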

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-797000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-797000: exit status 85 (100.551542ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-797000 | jenkins | v1.34.0 | 25 Oct 24 17:42 PDT |          |
	|         | -p download-only-797000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 17:42:39
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 17:42:39.571798    1673 out.go:345] Setting OutFile to fd 1 ...
	I1025 17:42:39.571965    1673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 17:42:39.571969    1673 out.go:358] Setting ErrFile to fd 2...
	I1025 17:42:39.571971    1673 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 17:42:39.572085    1673 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	W1025 17:42:39.572175    1673 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19868-1112/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19868-1112/.minikube/config/config.json: no such file or directory
	I1025 17:42:39.573560    1673 out.go:352] Setting JSON to true
	I1025 17:42:39.592908    1673 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":730,"bootTime":1729902629,"procs":559,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 17:42:39.592980    1673 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 17:42:39.597472    1673 out.go:97] [download-only-797000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 17:42:39.597613    1673 notify.go:220] Checking for updates...
	W1025 17:42:39.597653    1673 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 17:42:39.600412    1673 out.go:169] MINIKUBE_LOCATION=19868
	I1025 17:42:39.605317    1673 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 17:42:39.609481    1673 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 17:42:39.612450    1673 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 17:42:39.613972    1673 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	W1025 17:42:39.620463    1673 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 17:42:39.620698    1673 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 17:42:39.624432    1673 out.go:97] Using the qemu2 driver based on user configuration
	I1025 17:42:39.624452    1673 start.go:297] selected driver: qemu2
	I1025 17:42:39.624466    1673 start.go:901] validating driver "qemu2" against <nil>
	I1025 17:42:39.624525    1673 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 17:42:39.628423    1673 out.go:169] Automatically selected the socket_vmnet network
	I1025 17:42:39.635424    1673 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1025 17:42:39.635534    1673 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 17:42:39.635596    1673 cni.go:84] Creating CNI manager for ""
	I1025 17:42:39.635641    1673 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1025 17:42:39.635700    1673 start.go:340] cluster config:
	{Name:download-only-797000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-797000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 17:42:39.640205    1673 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 17:42:39.643371    1673 out.go:97] Downloading VM boot image ...
	I1025 17:42:39.643409    1673 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/iso/arm64/minikube-v1.34.0-1729002252-19806-arm64.iso
	I1025 17:42:45.919368    1673 out.go:97] Starting "download-only-797000" primary control-plane node in "download-only-797000" cluster
	I1025 17:42:45.919411    1673 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 17:42:45.977095    1673 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1025 17:42:45.977117    1673 cache.go:56] Caching tarball of preloaded images
	I1025 17:42:45.977316    1673 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 17:42:45.981410    1673 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1025 17:42:45.981416    1673 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 17:42:46.062299    1673 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1025 17:42:53.053641    1673 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 17:42:53.053803    1673 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1025 17:42:53.765875    1673 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1025 17:42:53.766083    1673 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/download-only-797000/config.json ...
	I1025 17:42:53.766100    1673 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/download-only-797000/config.json: {Name:mk10c10bf644c4c9b3237622517f91c78f3b9cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:42:53.766364    1673 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1025 17:42:53.766614    1673 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1025 17:42:54.334703    1673 out.go:193] 
	W1025 17:42:54.338897    1673 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19868-1112/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1073b5320 0x1073b5320 0x1073b5320 0x1073b5320 0x1073b5320 0x1073b5320 0x1073b5320] Decompressors:map[bz2:0x14000886210 gz:0x14000886218 tar:0x140008861c0 tar.bz2:0x140008861d0 tar.gz:0x140008861e0 tar.xz:0x140008861f0 tar.zst:0x14000886200 tbz2:0x140008861d0 tgz:0x140008861e0 txz:0x140008861f0 tzst:0x14000886200 xz:0x14000886220 zip:0x14000886230 zst:0x14000886228] Getters:map[file:0x14000ac2870 http:0x140008740a0 https:0x140008740f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1025 17:42:54.338925    1673 out_reason.go:110] 
	W1025 17:42:54.346861    1673 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 17:42:54.350724    1673 out.go:193] 
	
	
	* The control-plane node download-only-797000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-797000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
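
The exit status 85 is expected when asking for logs from a download-only profile that never started; the real signal in this log is the buried "bad response code: 404", which says the v1.20.0 darwin/arm64 kubectl checksum file does not exist upstream. That can be confirmed without minikube (a sketch; the 404/200 annotations are what this run's log implies, not guaranteed output):

	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256   # 404, per the error above
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256   # 200, since the v1.31.2 download succeeds below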

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-797000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.2/json-events (10.62s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-477000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-477000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 : (10.615578291s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (10.62s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1025 17:43:05.372232    1672 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1025 17:43:05.372282    1672 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-477000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-477000: exit status 85 (78.461709ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-797000 | jenkins | v1.34.0 | 25 Oct 24 17:42 PDT |                     |
	|         | -p download-only-797000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 25 Oct 24 17:42 PDT | 25 Oct 24 17:42 PDT |
	| delete  | -p download-only-797000        | download-only-797000 | jenkins | v1.34.0 | 25 Oct 24 17:42 PDT | 25 Oct 24 17:42 PDT |
	| start   | -o=json --download-only        | download-only-477000 | jenkins | v1.34.0 | 25 Oct 24 17:42 PDT |                     |
	|         | -p download-only-477000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/25 17:42:54
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 17:42:54.787418    1699 out.go:345] Setting OutFile to fd 1 ...
	I1025 17:42:54.787570    1699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 17:42:54.787573    1699 out.go:358] Setting ErrFile to fd 2...
	I1025 17:42:54.787577    1699 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 17:42:54.787695    1699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 17:42:54.788858    1699 out.go:352] Setting JSON to true
	I1025 17:42:54.806594    1699 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":745,"bootTime":1729902629,"procs":546,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 17:42:54.806673    1699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 17:42:54.811526    1699 out.go:97] [download-only-477000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 17:42:54.811609    1699 notify.go:220] Checking for updates...
	I1025 17:42:54.815731    1699 out.go:169] MINIKUBE_LOCATION=19868
	I1025 17:42:54.818726    1699 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 17:42:54.822697    1699 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 17:42:54.825725    1699 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 17:42:54.828702    1699 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	W1025 17:42:54.834699    1699 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 17:42:54.834854    1699 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 17:42:54.837689    1699 out.go:97] Using the qemu2 driver based on user configuration
	I1025 17:42:54.837698    1699 start.go:297] selected driver: qemu2
	I1025 17:42:54.837701    1699 start.go:901] validating driver "qemu2" against <nil>
	I1025 17:42:54.837749    1699 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1025 17:42:54.839119    1699 out.go:169] Automatically selected the socket_vmnet network
	I1025 17:42:54.843961    1699 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1025 17:42:54.844056    1699 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 17:42:54.844077    1699 cni.go:84] Creating CNI manager for ""
	I1025 17:42:54.844106    1699 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1025 17:42:54.844111    1699 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1025 17:42:54.844161    1699 start.go:340] cluster config:
	{Name:download-only-477000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-477000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 17:42:54.848535    1699 iso.go:125] acquiring lock: {Name:mkfa069328e3a91188771473d7a94cf2fefbeacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 17:42:54.851695    1699 out.go:97] Starting "download-only-477000" primary control-plane node in "download-only-477000" cluster
	I1025 17:42:54.851702    1699 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 17:42:54.915107    1699 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 17:42:54.915117    1699 cache.go:56] Caching tarball of preloaded images
	I1025 17:42:54.915320    1699 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 17:42:54.920690    1699 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1025 17:42:54.920697    1699 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1025 17:42:55.001022    1699 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4?checksum=md5:5f3d7369b12f47138aa2863bb7bda916 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1025 17:43:02.753480    1699 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1025 17:43:02.753643    1699 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1025 17:43:03.285054    1699 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1025 17:43:03.285239    1699 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/download-only-477000/config.json ...
	I1025 17:43:03.285254    1699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/download-only-477000/config.json: {Name:mk0be6c36d0cea7990e19554f4f33825552a2e93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 17:43:03.285523    1699 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1025 17:43:03.285668    1699 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19868-1112/.minikube/cache/darwin/arm64/v1.31.2/kubectl
	
	
	* The control-plane node download-only-477000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-477000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.08s)

TestDownloadOnly/v1.31.2/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.11s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-477000
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.31s)

=== RUN   TestBinaryMirror
I1025 17:43:05.896538    1672 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-003000 --alsologtostderr --binary-mirror http://127.0.0.1:49313 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-003000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-003000
--- PASS: TestBinaryMirror (0.31s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-521000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-521000: exit status 85 (63.514ms)

-- stdout --
	* Profile "addons-521000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-521000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-521000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-521000: exit status 85 (67.294792ms)

-- stdout --
	* Profile "addons-521000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-521000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (198.06s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-521000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-darwin-arm64 start -p addons-521000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m18.058648334s)
--- PASS: TestAddons/Setup (198.06s)

TestAddons/serial/Volcano (39.05s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 6.953583ms
addons_test.go:815: volcano-admission stabilized in 7.054917ms
addons_test.go:807: volcano-scheduler stabilized in 7.138125ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-6jljq" [ac50b703-f9f6-4127-9ffd-4d8afcafb9bd] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.009059042s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-htxdl" [097be7a3-5178-430a-ab69-80b129b0855a] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004792s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-hczd5" [84546511-5cd9-4846-b2b2-6ab781d17973] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.007220375s
addons_test.go:842: (dbg) Run:  kubectl --context addons-521000 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-521000 create -f testdata/vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-521000 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [3f1e5a1a-9018-406f-9281-50e932591bd1] Pending
helpers_test.go:344: "test-job-nginx-0" [3f1e5a1a-9018-406f-9281-50e932591bd1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [3f1e5a1a-9018-406f-9281-50e932591bd1] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.005379666s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-521000 addons disable volcano --alsologtostderr -v=1: (10.8035085s)
--- PASS: TestAddons/serial/Volcano (39.05s)

TestAddons/serial/GCPAuth/Namespaces (0.08s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-521000 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-521000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.08s)

TestAddons/serial/GCPAuth/FakeCredentials (9.39s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-521000 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-521000 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ab9d55dd-c1a6-42bc-87ff-9bb8771082b6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ab9d55dd-c1a6-42bc-87ff-9bb8771082b6] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004926041s
addons_test.go:633: (dbg) Run:  kubectl --context addons-521000 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-521000 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-521000 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-521000 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.39s)

TestAddons/parallel/Registry (14.53s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 1.352542ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-f2qql" [3dbf0bdd-ff8c-4e4e-8fd7-5bb333398b90] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0057425s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qlnph" [2fc326f9-f569-4b72-aff0-c5f749839064] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007789958s
addons_test.go:331: (dbg) Run:  kubectl --context addons-521000 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-521000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-521000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.188009625s)
addons_test.go:350: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 ip
2024/10/25 17:47:36 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.53s)
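
The DEBUG GET on 192.168.105.2:5000 is the host-side probe of the registry addon. The same endpoint can be queried manually via the standard Docker Registry HTTP API (a sketch; the IP is specific to this run and comes from the `minikube -p addons-521000 ip` call above):

	curl -s http://192.168.105.2:5000/v2/_catalog
	# a healthy registry answers with JSON such as {"repositories":[]}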

TestAddons/parallel/Ingress (16.56s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-521000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-521000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-521000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [940a2680-a5b9-4120-a7a6-be5526e3beb7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [940a2680-a5b9-4120-a7a6-be5526e3beb7] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.009851084s
I1025 17:48:45.393673    1672 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-521000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-521000 addons disable ingress --alsologtostderr -v=1: (7.271576583s)
--- PASS: TestAddons/parallel/Ingress (16.56s)

TestAddons/parallel/InspektorGadget (10.3s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vhrmv" [5e3263b2-2d9d-4a2b-9dac-6decc02db6d3] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.008144833s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-521000 addons disable inspektor-gadget --alsologtostderr -v=1: (5.294084375s)
--- PASS: TestAddons/parallel/InspektorGadget (10.30s)

TestAddons/parallel/MetricsServer (5.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 1.409667ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-dk7ct" [5c837599-8dc0-4b2e-ba7e-0cff574cc201] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.012003125s
addons_test.go:402: (dbg) Run:  kubectl --context addons-521000 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.31s)
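The functional check here is simply that the metrics pipeline serves data end to end: once the metrics-server pod is Running, `kubectl top` returns rows instead of an error. Roughly:

    # fails until metrics-server is actually serving the Metrics API
    kubectl --context addons-521000 top pods -n kube-system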

TestAddons/parallel/CSI (41.7s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1025 17:47:58.248769    1672 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1025 17:47:58.251587    1672 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1025 17:47:58.251598    1672 kapi.go:107] duration metric: took 2.851833ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 2.859417ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-521000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-521000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9c68eaf4-2227-49c3-b5ea-b455b74216ed] Pending
helpers_test.go:344: "task-pv-pod" [9c68eaf4-2227-49c3-b5ea-b455b74216ed] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9c68eaf4-2227-49c3-b5ea-b455b74216ed] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.011164292s
addons_test.go:511: (dbg) Run:  kubectl --context addons-521000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-521000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-521000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-521000 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-521000 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-521000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-521000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e78d05a0-7665-4979-9173-8e707f7a344a] Pending
helpers_test.go:344: "task-pv-pod-restore" [e78d05a0-7665-4979-9173-8e707f7a344a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e78d05a0-7665-4979-9173-8e707f7a344a] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.005497666s
addons_test.go:553: (dbg) Run:  kubectl --context addons-521000 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-521000 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-521000 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-521000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.130190042s)
--- PASS: TestAddons/parallel/CSI (41.70s)
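The sequence above is the usual CSI smoke test: provision a volume, consume it, snapshot it, then restore a new claim from the snapshot. Condensed to its kubectl skeleton (all manifests are the test's own testdata):

    kubectl --context addons-521000 create -f testdata/csi-hostpath-driver/pvc.yaml         # provision
    kubectl --context addons-521000 create -f testdata/csi-hostpath-driver/pv-pod.yaml      # consume
    kubectl --context addons-521000 create -f testdata/csi-hostpath-driver/snapshot.yaml    # snapshot
    kubectl --context addons-521000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml # restore
    # snapshot readiness is polled the same way the helper does:
    kubectl --context addons-521000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default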

TestAddons/parallel/Headlamp (17.61s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-521000 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-86hr5" [8d8b803c-cf08-444b-af2c-5739e70283c1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-86hr5" [8d8b803c-cf08-444b-af2c-5739e70283c1] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003281791s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-521000 addons disable headlamp --alsologtostderr -v=1: (5.247648166s)
--- PASS: TestAddons/parallel/Headlamp (17.61s)

TestAddons/parallel/CloudSpanner (6.19s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-vc49m" [4749e3a4-8a46-4187-b251-3bd769d4c846] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005347792s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.19s)

TestAddons/parallel/LocalPath (40.97s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-521000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-521000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-521000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [309c170b-3590-4dde-aec9-35745efa27ec] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [309c170b-3590-4dde-aec9-35745efa27ec] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [309c170b-3590-4dde-aec9-35745efa27ec] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003835458s
addons_test.go:906: (dbg) Run:  kubectl --context addons-521000 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 ssh "cat /opt/local-path-provisioner/pvc-814243da-eddf-465c-b9fa-f63ab0041fd0_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-521000 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-521000 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-521000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (31.458075375s)
--- PASS: TestAddons/parallel/LocalPath (40.97s)
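The repeated `get pvc` calls above are the helper polling `.status.phase` until the claim binds; with local-path the bind typically happens only once a consuming pod is scheduled (the provisioner's default volumeBindingMode is WaitForFirstConsumer). The poll itself is just:

    kubectl --context addons-521000 get pvc test-pvc -o jsonpath={.status.phase} -n default   # Pending, then Bound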

TestAddons/parallel/NvidiaDevicePlugin (6.19s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6c8rc" [07d9add8-db41-4713-ade2-785eddb196ae] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.008625s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.19s)

TestAddons/parallel/Yakd (10.25s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-zt2fb" [2ae6d9ea-5405-4440-bd36-68a99e0eb429] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.00476775s
addons_test.go:992: (dbg) Run:  out/minikube-darwin-arm64 -p addons-521000 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-darwin-arm64 -p addons-521000 addons disable yakd --alsologtostderr -v=1: (5.243209125s)
--- PASS: TestAddons/parallel/Yakd (10.25s)

TestAddons/StoppedEnableDisable (12.44s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-521000
addons_test.go:170: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-521000: (12.236041291s)
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-521000
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-521000
addons_test.go:183: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-521000
--- PASS: TestAddons/StoppedEnableDisable (12.44s)
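Addon toggles operate on the profile's stored config rather than on the live cluster, which is why they still succeed here after `stop`. Roughly:

    out/minikube-darwin-arm64 stop -p addons-521000
    # enable/disable work against a stopped cluster
    out/minikube-darwin-arm64 addons enable dashboard -p addons-521000
    out/minikube-darwin-arm64 addons disable dashboard -p addons-521000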

TestHyperKitDriverInstallOrUpdate (11.96s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1025 18:41:33.173604    1672 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1025 18:41:33.173825    1672 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1025 18:41:35.833913    1672 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1025 18:41:35.834147    1672 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1025 18:41:35.834203    1672 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/001/docker-machine-driver-hyperkit
I1025 18:41:36.332945    1672 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x104d0e6e0 0x104d0e6e0 0x104d0e6e0 0x104d0e6e0 0x104d0e6e0 0x104d0e6e0 0x104d0e6e0] Decompressors:map[bz2:0x14000598468 gz:0x14000598520 tar:0x140005984c0 tar.bz2:0x140005984d0 tar.gz:0x140005984e0 tar.xz:0x140005984f0 tar.zst:0x14000598500 tbz2:0x140005984d0 tgz:0x140005984e0 txz:0x140005984f0 tzst:0x14000598500 xz:0x14000598528 zip:0x14000598530 zst:0x14000598550] Getters:map[file:0x140015cc080 http:0x140008ff7c0 https:0x140008ff810] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1025 18:41:36.332972    1672 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate220630889/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (11.96s)
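Note the fallback visible in the log: the arch-specific asset's checksum file 404s, so download.go retries the common, un-suffixed asset. A hypothetical shell equivalent of that retry (the curl invocation is illustrative, not the tool's actual code; URLs are from the log):

    # try the arch-specific binary first, fall back to the common build on failure
    BASE=https://github.com/kubernetes/minikube/releases/download/v1.3.0
    curl -fLO "$BASE/docker-machine-driver-hyperkit-arm64" \
      || curl -fLO "$BASE/docker-machine-driver-hyperkit"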

TestErrorSpam/setup (33.91s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-092000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-092000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 --driver=qemu2 : (33.914470375s)
--- PASS: TestErrorSpam/setup (33.91s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.72s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 pause
--- PASS: TestErrorSpam/pause (0.72s)

TestErrorSpam/unpause (0.64s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 unpause
--- PASS: TestErrorSpam/unpause (0.64s)

TestErrorSpam/stop (55.27s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 stop: (3.174158583s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 stop: (26.059570417s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-092000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-092000 stop: (26.038007667s)
--- PASS: TestErrorSpam/stop (55.27s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19868-1112/.minikube/files/etc/test/nested/copy/1672/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.39s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-701000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E1025 17:51:24.328881    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:51:24.335693    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:51:24.349148    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:51:24.372595    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:51:24.416022    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:51:24.499505    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:51:24.661366    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:51:24.985154    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:51:25.628925    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:51:26.912378    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-701000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (49.385496917s)
--- PASS: TestFunctional/serial/StartWithProxy (49.39s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.11s)

=== RUN   TestFunctional/serial/SoftStart
I1025 17:51:26.944414    1672 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-701000 --alsologtostderr -v=8
E1025 17:51:29.476481    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:51:34.600262    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:51:44.843894    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
E1025 17:52:05.327549    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-701000 --alsologtostderr -v=8: (39.113769792s)
functional_test.go:663: soft start took 39.114153459s for "functional-701000" cluster.
I1025 17:52:06.057927    1672 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (39.11s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-701000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-701000 cache add registry.k8s.io/pause:3.1: (1.189415583s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-701000 cache add registry.k8s.io/pause:3.3: (1.116513167s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)
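`cache add` pulls the image on the host and preloads it into the node's container runtime; the verify_cache_inside_node step below confirms it landed. The round trip, condensed:

    out/minikube-darwin-arm64 -p functional-701000 cache add registry.k8s.io/pause:3.1
    # confirm the image is present inside the node
    out/minikube-darwin-arm64 -p functional-701000 ssh sudo crictl images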

TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-701000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local89352977/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 cache add minikube-local-cache-test:functional-701000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 cache delete minikube-local-cache-test:functional-701000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-701000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)
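Locally built images follow the same path: build on the host, cache into the node, delete the cache entry when done. A sketch with the image tag from this run (the test builds from a throwaway temp-dir context; any build context works):

    docker build -t minikube-local-cache-test:functional-701000 .
    out/minikube-darwin-arm64 -p functional-701000 cache add minikube-local-cache-test:functional-701000
    out/minikube-darwin-arm64 -p functional-701000 cache delete minikube-local-cache-test:functional-701000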

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-701000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (72.654458ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.72s)
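`cache reload` re-pushes everything in the host-side cache into the node, which is how the image deleted above comes back. The whole exercise, condensed (commands verbatim from this run):

    out/minikube-darwin-arm64 -p functional-701000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-arm64 -p functional-701000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: gone
    out/minikube-darwin-arm64 -p functional-701000 cache reload
    out/minikube-darwin-arm64 -p functional-701000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: restored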

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.79s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 kubectl -- --context functional-701000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.79s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-701000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-701000 get pods: (1.166999166s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.17s)

TestFunctional/serial/ExtraConfig (36.88s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-701000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 17:52:46.291151    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/addons-521000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-701000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.87486725s)
functional_test.go:761: restart took 36.874956917s for "functional-701000" cluster.
I1025 17:52:50.300754    1672 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (36.88s)
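`--extra-config` threads component flags through to the deployed control plane; here the restart adds an apiserver admission plugin and then waits for all components to come back:

    out/minikube-darwin-arm64 start -p functional-701000 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all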

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-701000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)
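The check selects the four control-plane pods by label and reads phase and Ready condition out of the JSON; the same data is available directly:

    # etcd, kube-apiserver, kube-controller-manager and kube-scheduler all carry tier=control-plane
    kubectl --context functional-701000 get po -l tier=control-plane -n kube-system -o=json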

TestFunctional/serial/LogsCmd (0.63s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.63s)

TestFunctional/serial/LogsFileCmd (0.61s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1029274984/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.61s)

TestFunctional/serial/InvalidService (3.63s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-701000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-701000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-701000: exit status 115 (131.738417ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30111 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-701000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.63s)
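Exit status 115 is the SVC_UNREACHABLE path shown in the stderr above: the NodePort is allocated, but no running pod backs the service. Reproducing it is just:

    kubectl --context functional-701000 apply -f testdata/invalidsvc.yaml
    out/minikube-darwin-arm64 service invalid-svc -p functional-701000    # exit 115
    kubectl --context functional-701000 delete -f testdata/invalidsvc.yaml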

TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-701000 config get cpus: exit status 14 (35.31ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-701000 config get cpus: exit status 14 (37.310167ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
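`config get` on an unset key exits 14, which is what both non-zero assertions above rely on. The full set/get/unset round trip:

    out/minikube-darwin-arm64 -p functional-701000 config get cpus     # exit 14: key not found
    out/minikube-darwin-arm64 -p functional-701000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-701000 config get cpus     # prints 2
    out/minikube-darwin-arm64 -p functional-701000 config unset cpus   # get exits 14 again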

TestFunctional/parallel/DashboardCmd (8.9s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-701000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-701000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2300: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.90s)

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-701000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-701000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (122.715917ms)

-- stdout --
	* [functional-701000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1025 17:53:36.651872    2287 out.go:345] Setting OutFile to fd 1 ...
	I1025 17:53:36.652050    2287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 17:53:36.652053    2287 out.go:358] Setting ErrFile to fd 2...
	I1025 17:53:36.652056    2287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 17:53:36.652196    2287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 17:53:36.653314    2287 out.go:352] Setting JSON to false
	I1025 17:53:36.671880    2287 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1387,"bootTime":1729902629,"procs":550,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 17:53:36.671987    2287 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 17:53:36.675613    2287 out.go:177] * [functional-701000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1025 17:53:36.683534    2287 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 17:53:36.683580    2287 notify.go:220] Checking for updates...
	I1025 17:53:36.690526    2287 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 17:53:36.693567    2287 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 17:53:36.696577    2287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 17:53:36.699503    2287 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 17:53:36.702603    2287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 17:53:36.705974    2287 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 17:53:36.706252    2287 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 17:53:36.710522    2287 out.go:177] * Using the qemu2 driver based on existing profile
	I1025 17:53:36.717572    2287 start.go:297] selected driver: qemu2
	I1025 17:53:36.717578    2287 start.go:901] validating driver "qemu2" against &{Name:functional-701000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 17:53:36.717636    2287 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 17:53:36.723592    2287 out.go:201] 
	W1025 17:53:36.727574    2287 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 17:53:36.731381    2287 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-701000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)
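`--dry-run` runs the full validation path without creating a VM; requesting 250MB trips the 1800MB usable minimum and exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second invocation with no memory override validates cleanly:

    out/minikube-darwin-arm64 start -p functional-701000 --dry-run --memory 250MB --driver=qemu2   # exit 23
    out/minikube-darwin-arm64 start -p functional-701000 --dry-run --driver=qemu2                  # succeeds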

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-701000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-701000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (118.688292ms)

-- stdout --
	* [functional-701000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1025 17:53:36.525771    2283 out.go:345] Setting OutFile to fd 1 ...
	I1025 17:53:36.526062    2283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 17:53:36.526069    2283 out.go:358] Setting ErrFile to fd 2...
	I1025 17:53:36.526072    2283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1025 17:53:36.526255    2283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
	I1025 17:53:36.527962    2283 out.go:352] Setting JSON to false
	I1025 17:53:36.547642    2283 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1387,"bootTime":1729902629,"procs":550,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1025 17:53:36.547773    2283 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1025 17:53:36.553706    2283 out.go:177] * [functional-701000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1025 17:53:36.561672    2283 out.go:177]   - MINIKUBE_LOCATION=19868
	I1025 17:53:36.561735    2283 notify.go:220] Checking for updates...
	I1025 17:53:36.568537    2283 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	I1025 17:53:36.569820    2283 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1025 17:53:36.572573    2283 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 17:53:36.575614    2283 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	I1025 17:53:36.578567    2283 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 17:53:36.581892    2283 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1025 17:53:36.582164    2283 driver.go:394] Setting default libvirt URI to qemu:///system
	I1025 17:53:36.586545    2283 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1025 17:53:36.593556    2283 start.go:297] selected driver: qemu2
	I1025 17:53:36.593562    2283 start.go:901] validating driver "qemu2" against &{Name:functional-701000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19806/minikube-v1.34.0-1729002252-19806-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-701000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 17:53:36.593616    2283 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 17:53:36.600575    2283 out.go:201] 
	W1025 17:53:36.604505    2283 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 17:53:36.608495    2283 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
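
Note: TestFunctional/parallel/InternationalLanguage repeats the under-provisioned dry run under a French locale and asserts the localized RSRC_INSUFFICIENT_REQ_MEMORY message shown above. A minimal reproduction sketch, assuming minikube picks the message catalog from the standard locale environment variables (LC_ALL=fr is our assumption here, not taken from this log):

  # fails fast with the French "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY" message
  $ LC_ALL=fr out/minikube-darwin-arm64 start -p functional-701000 --dry-run --memory 250MB --driver=qemu2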

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)
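
Note: the -f flag above renders a Go template over minikube's status struct; the labels are literal template text, so the "kublet:" misspelling in the logged command comes from the test's template string, not from a field name. A corrected-label variant of the same check:

  # Host/Kubelet/APIServer/Kubeconfig are the template fields the test exercises
  $ out/minikube-darwin-arm64 -p functional-701000 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'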

TestFunctional/parallel/AddonsCmd (0.11s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)

TestFunctional/parallel/PersistentVolumeClaim (24.9s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [997ede19-c559-4785-a5a5-ddbf1a941f3d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008817s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-701000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-701000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-701000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-701000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [92cf7bb4-e104-4e03-a9b2-304f750964ef] Pending
helpers_test.go:344: "sp-pod" [92cf7bb4-e104-4e03-a9b2-304f750964ef] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [92cf7bb4-e104-4e03-a9b2-304f750964ef] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.006142167s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-701000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-701000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-701000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [09e91d86-4566-4e56-ae2c-e17b9cdf083e] Pending
helpers_test.go:344: "sp-pod" [09e91d86-4566-4e56-ae2c-e17b9cdf083e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [09e91d86-4566-4e56-ae2c-e17b9cdf083e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.01094275s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-701000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.90s)
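
Note: the persistence check above condenses to the commands below (pvc.yaml and pod.yaml are the testdata manifests referenced in the log); the pod mounts the claim at /tmp/mount, so a file written there must survive a pod delete/recreate:

  $ kubectl --context functional-701000 get pvc myclaim -o jsonpath='{.status.phase}'    # expect Bound
  $ kubectl --context functional-701000 exec sp-pod -- touch /tmp/mount/foo
  $ kubectl --context functional-701000 delete -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-701000 apply -f testdata/storage-provisioner/pod.yaml
  $ kubectl --context functional-701000 exec sp-pod -- ls /tmp/mount                     # foo persists across pods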

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.42s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh -n functional-701000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 cp functional-701000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3709829099/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh -n functional-701000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh -n functional-701000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.42s)
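
Note: minikube cp copies in both directions, as the runs above show; in-VM paths are addressed with a <node>: prefix and the results are verified over ssh:

  $ out/minikube-darwin-arm64 -p functional-701000 cp testdata/cp-test.txt /home/docker/cp-test.txt              # host -> VM
  $ out/minikube-darwin-arm64 -p functional-701000 cp functional-701000:/home/docker/cp-test.txt ./cp-test.txt   # VM -> host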

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1672/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "sudo cat /etc/test/nested/copy/1672/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.49s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1672.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "sudo cat /etc/ssl/certs/1672.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1672.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "sudo cat /usr/share/ca-certificates/1672.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/16722.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "sudo cat /etc/ssl/certs/16722.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/16722.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "sudo cat /usr/share/ca-certificates/16722.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.49s)
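
Note: the hashed filenames checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash naming convention for CA directories such as /etc/ssl/certs. To confirm which certificate a given hash belongs to (cert.pem below is a placeholder for either synced .pem file):

  # prints the 8-hex-digit subject hash that the symlink name is derived from
  $ openssl x509 -noout -subject_hash -in cert.pem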

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-701000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-701000 ssh "sudo systemctl is-active crio": exit status 1 (122.890708ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)
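
Note: the non-zero exit above is the expected outcome: systemctl is-active exits 0 only for an active unit (3 for an inactive one), and minikube ssh propagates the remote status. With docker as the configured runtime, the complementary check succeeds:

  $ out/minikube-darwin-arm64 -p functional-701000 ssh "sudo systemctl is-active docker"   # prints "active", exit 0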

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.95s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-701000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-701000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-701000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2146: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-701000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.95s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-701000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.03s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-701000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [988d2c01-af73-4936-8a90-dea5077974bc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [988d2c01-af73-4936-8a90-dea5077974bc] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00617025s
I1025 17:53:06.697310    1672 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-701000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.26.84 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1025 17:53:06.785767    1672 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1025 17:53:06.830257    1672 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-701000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
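
Note: the tunnel subtests above follow one lifecycle; a condensed manual equivalent (curl stands in for the Go HTTP client the test actually uses):

  $ out/minikube-darwin-arm64 -p functional-701000 tunnel --alsologtostderr &    # route LoadBalancer IPs to the host
  $ IP=$(kubectl --context functional-701000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  $ curl "http://$IP"                                                            # AccessDirect
  $ kill %1                                                                      # DeleteTunnel: stop the daemon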

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-701000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-701000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-wvcbj" [c5e26f91-156d-4a4b-acc0-c85a94daf882] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-wvcbj" [c5e26f91-156d-4a4b-acc0-c85a94daf882] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.01019825s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 service list -o json
functional_test.go:1494: Took "291.668292ms" to run "out/minikube-darwin-arm64 -p functional-701000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:32693
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:32693
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.15s)

TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "99.962042ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "39.849625ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.14s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "103.115333ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "39.729084ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)
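
Note: both JSON listings above are machine-readable; assuming jq is installed (it is not part of the test), the valid profile names can be extracted with:

  $ out/minikube-darwin-arm64 profile list -o json | jq -r '.valid[].Name'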

TestFunctional/parallel/MountCmd/any-port (5.26s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-701000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1690203057/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1729904007990167000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1690203057/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1729904007990167000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1690203057/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1729904007990167000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1690203057/001/test-1729904007990167000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-701000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (61.606292ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 17:53:28.052281    1672 retry.go:31] will retry after 518.100535ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 26 00:53 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 26 00:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 26 00:53 test-1729904007990167000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh cat /mount-9p/test-1729904007990167000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-701000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e64d9070-5969-4f5f-91de-742f9e62e489] Pending
helpers_test.go:344: "busybox-mount" [e64d9070-5969-4f5f-91de-742f9e62e489] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e64d9070-5969-4f5f-91de-742f9e62e489] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e64d9070-5969-4f5f-91de-742f9e62e489] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00367775s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-701000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-701000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1690203057/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.26s)
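
Note: the any-port flow above condenses to mounting a host directory over 9p, polling findmnt until the mount appears, then unmounting; hostdir below is a placeholder for any local directory:

  $ out/minikube-darwin-arm64 mount -p functional-701000 "$PWD/hostdir:/mount-9p" --alsologtostderr -v=1 &
  $ out/minikube-darwin-arm64 -p functional-701000 ssh "findmnt -T /mount-9p | grep 9p"   # retried until the mount shows up
  $ out/minikube-darwin-arm64 -p functional-701000 ssh "sudo umount -f /mount-9p"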

TestFunctional/parallel/MountCmd/specific-port (1.19s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-701000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3730951591/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-701000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (65.282042ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 17:53:33.320265    1672 retry.go:31] will retry after 695.628269ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-701000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3730951591/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-701000 ssh "sudo umount -f /mount-9p": exit status 1 (65.394458ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-701000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-701000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3730951591/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.19s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.8s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-701000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4221520768/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-701000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4221520768/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-701000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4221520768/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-701000 ssh "findmnt -T" /mount1: exit status 1 (75.730916ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 17:53:34.528013    1672 retry.go:31] will retry after 316.060302ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-701000 ssh "findmnt -T" /mount1: exit status 1 (61.184375ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 17:53:34.907436    1672 retry.go:31] will retry after 1.075493869s: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-701000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-701000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4221520768/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-701000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4221520768/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-701000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4221520768/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.80s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.21s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.21s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-701000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-701000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-701000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-701000 image ls --format short --alsologtostderr:
I1025 17:53:47.452521    2436 out.go:345] Setting OutFile to fd 1 ...
I1025 17:53:47.452698    2436 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 17:53:47.452703    2436 out.go:358] Setting ErrFile to fd 2...
I1025 17:53:47.452705    2436 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 17:53:47.452843    2436 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
I1025 17:53:47.453258    2436 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1025 17:53:47.453330    2436 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1025 17:53:47.454192    2436 ssh_runner.go:195] Run: systemctl --version
I1025 17:53:47.454203    2436 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/functional-701000/id_rsa Username:docker}
I1025 17:53:47.477063    2436 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-701000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.31.2           | f9c26480f1e72 | 91.6MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/minikube-local-cache-test | functional-701000 | 596a2da19d733 | 30B    |
| registry.k8s.io/kube-proxy                  | v1.31.2           | 021d242013305 | 94.7MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-controller-manager     | v1.31.2           | 9404aea098d9e | 85.9MB |
| docker.io/library/nginx                     | latest            | 4b196525bd3cc | 197MB  |
| docker.io/library/nginx                     | alpine            | 577a23b5858b9 | 50.8MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-701000 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.31.2           | d6b061e73ae45 | 66MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-701000 image ls --format table --alsologtostderr:
I1025 17:53:47.611571    2449 out.go:345] Setting OutFile to fd 1 ...
I1025 17:53:47.611755    2449 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 17:53:47.611758    2449 out.go:358] Setting ErrFile to fd 2...
I1025 17:53:47.611761    2449 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 17:53:47.611919    2449 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
I1025 17:53:47.612365    2449 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1025 17:53:47.612433    2449 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1025 17:53:47.613294    2449 ssh_runner.go:195] Run: systemctl --version
I1025 17:53:47.613303    2449 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/functional-701000/id_rsa Username:docker}
I1025 17:53:47.635620    2449 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-701000 image ls --format json --alsologtostderr:
[{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-701000"],"size":"4780000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"85900000"},{"id":"f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"91600000"},{"id":"d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"66000000"},{"id":"577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"50800000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"596a2da19d733eed24b1cf63e70d7812ec2f1bf1700a8be8a0c12d7d44d48e3a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-701000"],"size":"30"},{"id":"4b196525bd3cc6aa7a72ba63c6c2ae6d957b57edd603a7070c5e31f8e63c51f9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"197000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"94700000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-701000 image ls --format json --alsologtostderr:
I1025 17:53:47.532796    2444 out.go:345] Setting OutFile to fd 1 ...
I1025 17:53:47.532976    2444 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 17:53:47.532981    2444 out.go:358] Setting ErrFile to fd 2...
I1025 17:53:47.532983    2444 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 17:53:47.533129    2444 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
I1025 17:53:47.533586    2444 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1025 17:53:47.533645    2444 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1025 17:53:47.535386    2444 ssh_runner.go:195] Run: systemctl --version
I1025 17:53:47.535396    2444 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/functional-701000/id_rsa Username:docker}
I1025 17:53:47.558999    2444 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
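
Note: all of the image ls formats in this group (short, table, json, yaml) render the same inventory; the stderr lines show it being collected inside the VM, and the underlying call can be run directly:

  $ out/minikube-darwin-arm64 -p functional-701000 ssh 'docker images --no-trunc --format "{{json .}}"'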

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-701000 image ls --format yaml --alsologtostderr:
- id: 596a2da19d733eed24b1cf63e70d7812ec2f1bf1700a8be8a0c12d7d44d48e3a
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-701000
size: "30"
- id: 577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "50800000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "66000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-701000
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "85900000"
- id: 021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "94700000"
- id: 4b196525bd3cc6aa7a72ba63c6c2ae6d957b57edd603a7070c5e31f8e63c51f9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "197000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "91600000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-701000 image ls --format yaml --alsologtostderr:
I1025 17:53:47.452499    2437 out.go:345] Setting OutFile to fd 1 ...
I1025 17:53:47.452741    2437 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 17:53:47.452746    2437 out.go:358] Setting ErrFile to fd 2...
I1025 17:53:47.452749    2437 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 17:53:47.452875    2437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
I1025 17:53:47.453298    2437 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1025 17:53:47.453362    2437 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1025 17:53:47.454616    2437 ssh_runner.go:195] Run: systemctl --version
I1025 17:53:47.454623    2437 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/functional-701000/id_rsa Username:docker}
I1025 17:53:47.477062    2437 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-701000 ssh pgrep buildkitd: exit status 1 (66.76ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image build -t localhost/my-image:functional-701000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-701000 image build -t localhost/my-image:functional-701000 testdata/build --alsologtostderr: (1.716808167s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-701000 image build -t localhost/my-image:functional-701000 testdata/build --alsologtostderr:
I1025 17:53:47.598043    2447 out.go:345] Setting OutFile to fd 1 ...
I1025 17:53:47.598316    2447 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 17:53:47.598321    2447 out.go:358] Setting ErrFile to fd 2...
I1025 17:53:47.598324    2447 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1025 17:53:47.598455    2447 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19868-1112/.minikube/bin
I1025 17:53:47.598905    2447 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1025 17:53:47.599685    2447 config.go:182] Loaded profile config "functional-701000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1025 17:53:47.601112    2447 ssh_runner.go:195] Run: systemctl --version
I1025 17:53:47.601123    2447 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19868-1112/.minikube/machines/functional-701000/id_rsa Username:docker}
I1025 17:53:47.624363    2447 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1857195946.tar
I1025 17:53:47.624447    2447 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 17:53:47.628211    2447 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1857195946.tar
I1025 17:53:47.629639    2447 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1857195946.tar: stat -c "%s %y" /var/lib/minikube/build/build.1857195946.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1857195946.tar': No such file or directory
I1025 17:53:47.629654    2447 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1857195946.tar --> /var/lib/minikube/build/build.1857195946.tar (3072 bytes)
I1025 17:53:47.639852    2447 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1857195946
I1025 17:53:47.645333    2447 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1857195946 -xf /var/lib/minikube/build/build.1857195946.tar
I1025 17:53:47.648970    2447 docker.go:360] Building image: /var/lib/minikube/build/build.1857195946
I1025 17:53:47.649031    2447 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-701000 /var/lib/minikube/build/build.1857195946
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.1s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers done
#8 writing image sha256:5e60d1bc832d3d38cd61d50dffa7f768c7740d71e1fcbd5c5230f98939b63459 done
#8 naming to localhost/my-image:functional-701000 done
#8 DONE 0.0s
I1025 17:53:49.256393    2447 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-701000 /var/lib/minikube/build/build.1857195946: (1.607366458s)
I1025 17:53:49.256474    2447 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1857195946
I1025 17:53:49.260415    2447 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1857195946.tar
I1025 17:53:49.263702    2447 build_images.go:217] Built localhost/my-image:functional-701000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.1857195946.tar
I1025 17:53:49.263715    2447 build_images.go:133] succeeded building to: functional-701000
I1025 17:53:49.263718    2447 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.86s)
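From the BuildKit steps in the log above (a 97-byte Dockerfile, FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /), the testdata/build context evidently amounts to a Dockerfile roughly like the following; this is reconstructed from the build output, not quoted from the repository:

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /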

TestFunctional/parallel/ImageCommands/Setup (1.64s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.619161583s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-701000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.64s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image load --daemon kicbase/echo-server:functional-701000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.68s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image load --daemon kicbase/echo-server:functional-701000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image ls
2024/10/25 17:53:45 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-701000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image load --daemon kicbase/echo-server:functional-701000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

TestFunctional/parallel/DockerEnv/bash (0.31s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-701000 docker-env) && out/minikube-darwin-arm64 status -p functional-701000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-701000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.31s)
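For context, the docker-env subcommand evaluated above prints shell exports that repoint the local docker CLI at the VM's daemon. Exact values vary by driver and profile; for this qemu2 VM (address 192.168.105.4, as seen in the ssh client lines earlier) the output would look roughly like:

	$ out/minikube-darwin-arm64 -p functional-701000 docker-env
	export DOCKER_TLS_VERIFY="1"
	export DOCKER_HOST="tcp://192.168.105.4:2376"
	export DOCKER_CERT_PATH="/Users/jenkins/minikube-integration/19868-1112/.minikube/certs"
	export MINIKUBE_ACTIVE_DOCKERD="functional-701000"
	# To point your shell to minikube's docker-daemon, run:
	# eval $(out/minikube-darwin-arm64 -p functional-701000 docker-env)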

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image save kicbase/echo-server:functional-701000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.16s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image rm kicbase/echo-server:functional-701000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-701000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-701000 image save --daemon kicbase/echo-server:functional-701000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-701000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.18s)
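Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon cover the full save/remove/load round trip. In user terms it reduces to (tar path hypothetical):

	$ out/minikube-darwin-arm64 -p functional-701000 image save kicbase/echo-server:functional-701000 /tmp/echo-server.tar
	$ out/minikube-darwin-arm64 -p functional-701000 image load /tmp/echo-server.tar
	$ out/minikube-darwin-arm64 -p functional-701000 image ls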

TestFunctional/delete_echo-server_images (0.03s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-701000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-701000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-701000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/CopyFile (0.04s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-499000 status --output json -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/CopyFile (0.04s)

TestImageBuild/serial/Setup (34.79s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-401000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-401000 --driver=qemu2 : (34.787532s)
--- PASS: TestImageBuild/serial/Setup (34.79s)

TestImageBuild/serial/NormalBuild (1.36s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-401000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-401000: (1.362235792s)
--- PASS: TestImageBuild/serial/NormalBuild (1.36s)

TestImageBuild/serial/BuildWithBuildArg (0.41s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-401000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.41s)

TestImageBuild/serial/BuildWithDockerIgnore (0.31s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-401000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.31s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.33s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-401000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.33s)
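Between them the ImageBuild subtests exercise the build flags minikube forwards to the runtime: -t, -f, --build-opt=build-arg=... and --build-opt=no-cache. Folded into one invocation the surface looks like this (the combination is illustrative; each flag appears individually in the runs above):

	$ out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache -f inner/Dockerfile ./testdata/image-build/test-f -p image-401000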

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.66s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-346000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-346000 --output=json --user=testUser: (6.66087325s)
--- PASS: TestJSONOutput/stop/Command (6.66s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-474000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-474000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (98.834ms)

-- stdout --
	{"specversion":"1.0","id":"3ce92d23-7800-4452-8758-f1059cec884a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-474000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"be46a6b4-f470-48d6-8471-eb62bcd9fdd2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19868"}}
	{"specversion":"1.0","id":"dca9b7fe-ede8-488d-8dc8-b9f2919da5c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig"}}
	{"specversion":"1.0","id":"4b88dd05-df9b-4ef1-9f4c-c8fe282c2eba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c87a20f2-9ba7-48f5-b48d-7466d9404dbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"990bf039-c9fc-4968-8054-119ba93a72fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube"}}
	{"specversion":"1.0","id":"3a0b3711-a6ed-4f21-9801-650a25eb62db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"109b8f82-e39b-4744-9e46-34f060742dd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-474000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-474000
--- PASS: TestErrorJSONOutput (0.22s)
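Each stdout line above is a self-contained CloudEvents envelope (specversion, id, source, type, datacontenttype, data), so the --output=json stream can be processed line by line. For example, extracting the error message from a failed start (profile name hypothetical):

	$ out/minikube-darwin-arm64 start -p demo --driver=fail --output=json | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	The driver 'fail' is not supported on darwin/arm64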

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (70.8s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-329000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-329000 --driver=qemu2 : (34.750536791s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-331000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-331000 --driver=qemu2 : (35.349907875s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-329000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-331000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-331000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-331000
helpers_test.go:175: Cleaning up "first-329000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-329000
--- PASS: TestMinikubeProfile (70.80s)
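The profile checks above lean on profile list -ojson, which groups profiles under valid and invalid keys. A sketch of pulling out just the names, assuming that schema:

	$ out/minikube-darwin-arm64 profile list -ojson | jq -r '.valid[].Name'
	first-329000
	second-331000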

TestStoppedBinaryUpgrade/Setup (1.01s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.01s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-240000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-240000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (122.798791ms)

-- stdout --
	* [NoKubernetes-240000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19868
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19868-1112/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19868-1112/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
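As the usage error says, --no-kubernetes and --kubernetes-version are mutually exclusive. The working form, which the later NoKubernetes subtests use, simply drops the version pin:

	$ out/minikube-darwin-arm64 start -p NoKubernetes-240000 --no-kubernetes --driver=qemu2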

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-240000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-240000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.642459ms)

-- stdout --
	* The control-plane node NoKubernetes-240000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-240000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (31.18s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
E1025 18:52:55.667543    1672 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19868-1112/.minikube/profiles/functional-701000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.609453916s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.56794175s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.18s)

TestNoKubernetes/serial/Stop (2.07s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-240000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-240000: (2.071902333s)
--- PASS: TestNoKubernetes/serial/Stop (2.07s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-240000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-240000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.720375ms)

-- stdout --
	* The control-plane node NoKubernetes-240000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-240000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-473000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.76s)

TestStartStop/group/old-k8s-version/serial/Stop (3.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-825000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-825000 --alsologtostderr -v=3: (3.441642375s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.44s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-825000 -n old-k8s-version-825000: exit status 7 (55.408542ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-825000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/no-preload/serial/Stop (2.64s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-188000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-188000 --alsologtostderr -v=3: (2.638786709s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.64s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-188000 -n no-preload-188000: exit status 7 (58.400166ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-188000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (2.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-710000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-710000 --alsologtostderr -v=3: (2.008631208s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.01s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-710000 -n embed-certs-710000: exit status 7 (61.060667ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-710000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (1.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-332000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-332000 --alsologtostderr -v=3: (1.952883167s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.95s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-332000 -n default-k8s-diff-port-332000: exit status 7 (63.029625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-332000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-297000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (1.81s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-297000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-297000 --alsologtostderr -v=3: (1.805842667s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.81s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-297000 -n newest-cni-297000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-297000 -n newest-cni-297000: exit status 7 (67.901458ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-297000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/274)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
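
This skip is straight platform gating: the test logs the configured driver plus the host GOOS/GOARCH, then bails out unless it is running the containerd runtime on the Docker driver. A sketch of the pattern using the standard library's runtime package; the two booleans stand in for the suite's real driver and runtime flags:

    package docker_test

    import (
        "runtime"
        "testing"
    )

    func TestDockerEnvContainerd(t *testing.T) {
        // Stand-ins for the suite's real driver/runtime configuration.
        usingDockerDriver := false
        usingContainerd := false

        t.Logf("running with docker %v %s %s", usingDockerDriver, runtime.GOOS, runtime.GOARCH)
        if !usingDockerDriver || !usingContainerd {
            t.Skip("TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver")
        }
    }

On this darwin/arm64 QEMU runner the guard always fires, which is why the test shows up as a skip rather than a failure.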

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.47s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-660000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-660000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-660000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-660000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-660000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-660000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-660000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-660000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-660000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-660000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-660000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: /etc/hosts:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: /etc/resolv.conf:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-660000

>>> host: crictl pods:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: crictl containers:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> k8s: describe netcat deployment:
error: context "cilium-660000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-660000" does not exist

>>> k8s: netcat logs:
error: context "cilium-660000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-660000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-660000" does not exist

>>> k8s: coredns logs:
error: context "cilium-660000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-660000" does not exist

>>> k8s: api server logs:
error: context "cilium-660000" does not exist

>>> host: /etc/cni:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: ip a s:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: ip r s:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: iptables-save:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: iptables table nat:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-660000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-660000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-660000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-660000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-660000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-660000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-660000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-660000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-660000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-660000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-660000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: kubelet daemon config:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> k8s: kubelet logs:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-660000

>>> host: docker daemon status:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: docker daemon config:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: docker system info:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: cri-docker daemon status:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: cri-docker daemon config:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: cri-dockerd version:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: containerd daemon status:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: containerd daemon config:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: containerd config dump:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: crio daemon status:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: crio daemon config:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: /etc/crio:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

>>> host: crio config:
* Profile "cilium-660000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-660000"

----------------------- debugLogs end: cilium-660000 [took: 2.347740542s] --------------------------------
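
Every probe in the debugLogs dump above fails for the same root cause: the cilium-660000 profile was never started, so its context was never written to the kubeconfig (note the empty config under ">>> k8s: kubectl config:", where clusters, contexts, and users are all null). The kubectl probes pin that missing context and all hit the same configuration error. A minimal sketch of one such probe; kubectl's --context flag is real, while the harness around it is assumed:

    package net_test

    import (
        "os/exec"
        "testing"
    )

    // debugProbe runs a single kubectl command pinned to the profile's
    // context. If the profile was never started, the context is absent
    // from the kubeconfig and kubectl fails with "context was not found".
    func debugProbe(t *testing.T, profile string, args ...string) {
        t.Helper()
        kubectlArgs := append([]string{"--context", profile}, args...)
        out, err := exec.Command("kubectl", kubectlArgs...).CombinedOutput()
        if err != nil {
            t.Logf("kubectl %v: %v\n%s", kubectlArgs, err, out)
        }
    }

Called as debugProbe(t, "cilium-660000", "describe", "deployment", "netcat"), this would log the same context "cilium-660000" does not exist error seen above.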
helpers_test.go:175: Cleaning up "cilium-660000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-660000
--- SKIP: TestNetworkPlugins/group/cilium (2.47s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-470000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-470000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
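
Even for a skipped group the harness deletes the placeholder profile (the helpers_test.go:175 cleanup above) so later tests start from a clean slate. A sketch of that teardown, assuming only t.Cleanup from the standard testing package and the minikube delete invocation shown in the log:

    package helpers_test

    import (
        "os/exec"
        "testing"
    )

    // cleanupProfile registers a teardown that deletes the profile whether
    // the test passed, failed, or was skipped, mirroring the cleanup above.
    func cleanupProfile(t *testing.T, minikubeBinary, profile string) {
        t.Helper()
        t.Cleanup(func() {
            t.Logf("Cleaning up %q profile ...", profile)
            out, err := exec.Command(minikubeBinary, "delete", "-p", profile).CombinedOutput()
            if err != nil {
                t.Logf("delete -p %s failed: %v\n%s", profile, err, out)
            }
        })
    }

In this run the equivalent invocation was out/minikube-darwin-arm64 delete -p disable-driver-mounts-470000.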